Welcome to the Morpheus podcast. I'm Bill Alderson, your moderator, and I'm joined by Jim and Gus. I don't need to say their last names because you already know who they are. Listen, we've got a packed show today. Gus is going to kick us off with Claude versus OpenAI, the DoD arms race. Jim takes us through the Iran war and dives into betting. I'm going to take you through the quantum defense roadmap, and then Gus takes us through singularity without empathy: genius or catastrophe. We have a cool show. I know that doesn't mean much to you yet, but you're going to get clued in very shortly. So who's up first? I think Gus, right?

Yeah, I'm leading off, and I have a good segment. I'm happy to introduce it because it's so relevant to our lives right now. What's the biggest thing going on? The Iran war. And what's interesting is that while we're conducting this war, the DoD was using Claude Code, Anthropic's AI, for coordination and other tasks, in spite of the fact that the Trump administration had banned Anthropic from being used. That's real interesting. Meanwhile, OpenAI swooped in and signed a sweetheart deal with the Pentagon, and this is getting messy very fast. To me it raises a couple of questions. One of them, and Bill, I know you have opinions on this, is that Anthropic can say they're not going to work with the military, but there are so many different ways to access the product that it's hard to control, and that's what we're seeing: people just use their personal accounts. It reminds me a little of that scene from A Few Good Men where Jack Nicholson says, if people don't follow my orders, people die. Soldiers die. It's kind of weird that Trump said no, and it got leaked somehow that they're using it anyway. Interesting. The other thing is more of a moral question: if Anthropic is going to limit its product for our military, our enemies aren't limiting theirs. Is that responsible, or is it dangerous? That's what I'd like to kick off with. Where do you guys sit on this one? Bill?

Well, first of all, Albert Einstein was a pacifist. He didn't believe in war, but by God, he wrote the letter that launched the Manhattan Project and supported it. He did not want the bomb used against the Japanese; he wanted it used against the Germans. So it's interesting when you have a problem like this. Dario is basically saying, oh, you can use it for this, just give me a call if you need a waiver or whatever. So here are the Joint Chiefs of Staff or the President calling up Dario to ask, could we pretty please use your product for something that might have military consequence? Fully legal, fully constitutional, but against what Dario and his clan might want. That leads to a pretty big problem. If you buy a product from me, say a sandwich, but I tell you that if you don't go to my church you can't use my refrigerator, it just gets absurd. Now, I can understand that you can't use it for an illegal purpose, but we have a Constitution, we have a Bill of Rights, we have all sorts of laws that prevent the military from doing certain things. They can still violate those laws, as we've seen in the past, but not materially, in my opinion.
And so, consequently, we have all these safeguards, the very things Dario says he wants, and yet he wants to be called up for a waiver. And it's not just him; it's some of the people on his team. They were talking about it the other day: OpenAI and Anthropic all hire from the same pool, they poach from each other, and the employees want a commonality in what the company will and won't support. You might call it the woke stuff or whatever, but they have these rules of engagement: I'm not going to work for a company that supports this. And it goes back to, okay, what about Prohibition? You can buy my product and use it, but if you produce liquor, you can't. It gets to the point of absurdity when some admiral on a ship has to call up Dario and ask for a waiver to use the product in a situation that is perfectly legal and appropriate otherwise. You just have to go ask somebody else's permission. Now, I understand the issues, and you do not have to sell to the military. But frankly, as an American citizen, I don't think I would support the government buying any product from a vendor who says, you can buy my product and give me money, but you can't use it for X, Y, and Z. In my estimation, restricting use for otherwise legal purposes ought to mean an absolute federal ban on purchasing that product.

So basically, Bill, you have no opinion on this. Oh, I have an opinion. Let's take another angle on it; let me talk for a little bit here. So here it is. They're saying they want to restrict the use of their product and technology with the US government. However, just recently they got hacked by the Chinese, and the Chinese essentially have their full Anthropic code. They're going to take it and reverse engineer it. Now, there are people who believe it wasn't all of their code. Sure. Let's say they didn't get it; let's say they don't have it right now. It doesn't matter. The point is this: if you can absolutely guarantee that no one else can get your technology, if you can protect it to that level, then maybe you can stand on that high ground and say, we own it, we're going to protect it, no one will get to it, and we'll make sure the world is safe. Okay, great, take that position, if you can do it. But it's not realistic for anybody to claim they can safeguard everything they've created. And quantum is coming. Everything is going to be breakable, everything is going to be gotten into, unless we can get some quantum safeguards in place. So when they say, we don't want you to use our product for surveillance systems or autonomous weapons, we don't want it automatically going out and killing people, I get that. Take the high ground; that's great. But at the same time, if the enemy has it, or has the equivalent, and you're not willing to defend our country, that's a problem. That's an issue they're going to have to deal with morally. You don't want someone to use your product because you believe it's unethical, and I get that. But if the enemy has it, then where are we? Exactly.
And now, Bill, I know you're a big Claude Code user and fan. Are you going to be voting with your dollars anytime soon? Am I going to kick it to the curb? I tried. To be honest, I'll tell you: I went out and researched Codex, which is OpenAI's tool, and did some tutorials to figure out whether I could use Codex the same way I use Claude Code. For those of you listening who are vibe coders, you're going to want to know this. Codex's philosophy is that it sets up a whole bunch of ground rules as global parameters and local parameters. Right away, when I started asking the AI why it had a particular global parameter setting, it came back and said that Codex is built for Fortune 500 coding teams. Okay. So if you're an independent upstart, or if you create code and then run it immediately in a VM, it's not the best workflow, because OpenAI assumes you're storing that code in a repo, and those repos are usually large, multi-fork repos. The programmers aren't taking that code and applying it directly to VMs to run it; it goes into a very large application assembled from a whole bunch of repos, and the coder is abstracted away from the VMs, the hardware, and the domains it runs on. So I learned a great deal about the fact that I'm an independent individual contributor. I have five main VMs, all running different code and programs, and you guys know what they are. Sometimes we've been on a meeting like this and someone says, hey Bill, we need this feature on our podcast management system. And Gus, you're sitting there saying, oh yeah, when I was a product manager at the XYZ packet company, we would wait months and months for some superficial little search feature. And while we were still on the podcast, I went over to Claude Code and said, hey, I need a search feature on this particular page that does this. And boom. Then I said, there might be a ten-second hiccup and you'll have to reset the app for a minute, but then we'll have a very powerful search. That's the thing: Claude Code is very powerful for that kind of work, and I believe that's the growth path for all of this. Yes, Fortune 500 companies are using it, but I believe it's even more about people like me, who never gave them a dime before, and there are millions of me. That's the market. That's the marketplace they want. Claude Code does it, and hats off to them. Then I started researching and found that Grok is building its coding tools toward the same focus as Claude Code. And for those of you who don't always know the names we throw around: Claude Code is Anthropic's coding tool, which is why they're in the news today, the ability of their AI system to produce quick results. I think that's the reason the government was using it, whether for coding quickly or just for AI in general. But I think what Anthropic is trying to do here is create a two-tier system, where a product is either tier one, government-aligned and defense-focused, or tier two, civilian, trusted, and ethics-focused. That second tier is where Claude Code wants to go, and maybe Meta and Google want to go that direction too.
So it's feasible that they're asking for a two-tier system, and they're willing to walk away from the bigger, lucrative contracts that are probably out there. But then here comes OpenAI saying, I'm not stupid, I'm taking those contracts, because it's inevitable that eventually we're all going to be using AI. So you can say that, ethically, you don't want the government to use your product, but if the government still uses it and China still uses it, what good is your position? The restriction is in word only. And guess what: Sam Altman came in and got almost the same exact exclusions that Anthropic wanted. Is that right? Yes, he got them all. The way it was explained, you have all these coders at Anthropic and at OpenAI, and they all have a certain mental aversion to using AI for bad things, which we all do, and which we share as a nation. But think about what's at stake. Jeff Bezos is one of the wealthiest people in the world, and he has Amazon Web Services, and a huge part of that gravy train is the US government's AWS environment. If somebody like him came along and said the same sort of thing, Jeff Bezos would no longer be one of the wealthiest guys in the world. You can't just say, I'm a PETA advocate, I don't believe in testing on animals. Nobody really likes doing that. But if your daughter has a disease and a cure is within reach that requires some animal testing, the PETA stuff goes out the window. I'm sorry, but if a human being needs something and you can enable it, you can't just go around saying not in my backyard, or you can't use my product for things I don't believe in, I'm a vegetarian so you can't use it for cattle. It never stops.

So how do you think this is going to affect Anthropic in the long term? Remember the days of Huawei? Remember, and this is really old, folks, when 3Com sold its router division off to Huawei, and all of a sudden it was in the hands of the Chinese? What did the government do? They dropped that product line completely, because they could not get guarantees the technology was going to stay in the United States. So with Anthropic taking this position, hey, we don't want you to do this, the government generally says, you're just too hard to deal with, we're going somewhere else, and that affects you long term. They're making that choice. The same thing happened with TikTok, right? TikTok eventually had to come around, because otherwise it wasn't going to be allowed in this country. They essentially agreed to what the government wanted, and that's why TikTok is allowed in the United States now; it was on its way out. And Huawei never did recover here; they're gone. Good point. I have other things I could say about this. I get the feeling you do. For instance, Gus, TP-Link has 10,000 employees around the world building routers, switches, and all sorts of security devices for American homes.
Did you know that you cannot install certain types of TP-Link Deco products unless TP-Link, the company, has a direct capability to access and control your router? Millions of people don't know this, but if we ever went to war with China, China could call up TP-Link, which is owned by Chinese organizations, and even though they may have thousands of employees in the United States, simply say, give me access to Gus and Bill and Jim's routers right now, because I want to see what they're saying to people. And that's a problem. When I found out about that, and I tested it as a TP-Link customer myself, I immediately decided: no more TP-Link. I took one of those TP-Link routers, the ones shaped like a cylinder, and I asked AI to simulate putting it in a blender. But no, I'm serious: at some point you do have to care about what your adversary is doing. In my opinion, this is about being a good citizen and allowing our military to win. And it's not that we expect to need it, because I believe we're going to have peace with the Chinese and good relations with them. But we have to set the stage, and we cannot allow them access to tens of millions of American homes, just like we don't let them have TikTok on their terms. We can't allow them access to half the homes in America because they own the routers and have to have access to those routers to configure them. It's a problem, in my opinion. We'll see how this plays out.

Initially, when Claude refused, they got an uptick in people downloading Claude Code, right? The public kind of rewarded them. Maybe that's what happened. My contention is, if the US government says, we want Anthropic, and Anthropic says, no, we don't want to give it to you, then all of a sudden people take an interest in something they hadn't even thought about, and they're out there downloading it immediately: it must be good if the US government wants it. It may not be a long-term reward; it might only be a short-term one. And yet we know Claude Code is a good product. Hey, we'll see how it ends up. I think we'll just have to let it play out completely. So I'm going to move on to another subject, the same type of subject, but a different aspect of it. Jim, what's going on with our next segment?

Yeah, you guys have seen what's going on with the war in Iran and how we're bombing them to oblivion. It's really interesting: as we're seeing all these bombs being dropped, people who are going to be killed, and all this uncertainty, we suddenly see something else rising on the other side, which I'm maybe not a big fan of. In the gambling markets, they started forming wagers about how the war was going to be waged and what would happen, to the tune of a $600 million surge in bets for and against the war, basically. People are willing to bet on that.
Like I said, I'm not big on gambling, and I don't think you should ever try it. But here you're wagering on something with a really negative outcome. You could say it's positive in the sense that, in the United States, we need this addressed and taken care of, and I'm a supporter of that, absolutely. I just don't know that I would go out and put a wager on it. But whenever there's a fight, any kind of competition, there's usually a wager. That's just the way humans are. If there's something you can have an opinion for or against, it turns into a wager of some sort. We tried to prevent a fight, but Iran kept pushing, right? Kept pushing the edge, saying, we're going to drag this out. And finally the US is basically saying, no, we're going to finish it, we're done, we're not playing these games anymore. Americans do not want forever wars; that's the big thing. So the first thing we think about when we come into this kind of conflict is, are we in for the long haul? Are we going to be stuck here? That's what Americans are worried about. That's the big issue, right, guys? What do you think?

To give this some perspective: when I was a young man and had a full head of hair, I was in the US Navy, and I got sent to Great Lakes, Illinois, just above Chicago, for training on gun and missile fire control systems. This was in 1975, and we trained alongside the Iranians. We were so close that we sold them our ships. Our barracks were right next to each other; we ate in the dining hall together. Iranians were our friends; they were really good allies. And guess what? There are men my age, our age I should say, still in Iran who remember the days when the United States was an ally and a friend, the days of the Shah, when the Shah was in power. They remember flying to the United States, going to training, being a respected set of individuals from Iran, brilliant people. For goodness' sake, they're the Persian Empire. And now they're stuck in this regime with these people who are religious.

I want to rein things in a little bit here, because what we're talking about isn't whether this is a good war or whether Iran is a good place or a bad place. What we're talking about is: is it ethical to place bets on events that could kill people? Is this something we want to support? Because on Polymarket, as you guys know, you can bet on anything. And there were six traders who traded over a million dollars on what the first bombing targets in Iran would be. And this brought up a lot of issues about insider trading and things like that. I think we all agree war is bad; the question is, should we bet on it? And they don't allow bets to be placed on death, right? Like the number of people killed, things like that. Is that right? Okay, good, they don't allow that. So you can place a bet on which targets you think they're going to hit first; sure, that's allowed. If there's collateral damage and some people die, the bet isn't based on the number of people who die; it's based on these physical targets.
What if I'm a decision maker, Jim, and I have a million-dollar bet that the target is going to be over here, and I make sure that's the first place hit, and people die? That's a complete conflict of interest. If you have insider information, it's just like anything else, just like we talked about in one of our other episodes about sports betting, right? We did talk about Polymarket. Sure, but on Polymarket you can bet on anything, and there's a lot of insider knowledge. Well, Kalshi does similar betting on similar things, but they have voided bets that were tied directly to deaths. Right.

Why can all this happen now, anyhow? What's allowing it? The internet was a big part of it, right? AI is now a big part of it. Really, all of it is helping; we're deep into the information age, moving at a high rate. We can get information on just about anything. We can pull satellite images ourselves of things going on around the world. So the knowledge people have now makes them more capable of placing these bets. They're willing to make a million-dollar or hundred-million-dollar bet, or whatever it is, because they have the data, up-to-the-minute data. But at the same time, think about it: the US government has the same data. How do you think they're using it? Think about President Trump. He has to wage the war of opinion. So if he sees the betting going a particular way, positive for the United States, things are going well, the United States is going to win, he's got positive momentum. He knows the sentiment of the US citizen is with him. So he's going to use that same Polymarket data. It's becoming part of the whole process, part of the strategy of judging whether we're going to win or not.

You bring up a good point, and it reminds me that the polling that exists within platforms like Polymarket is much, much more accurate. This is nothing new: they used to bet on elections in the United States. I never knew that until I looked into it. One of the benefits was that the polling was much more accurate than a group of people going out and cherry-picking who they were going to interview. And people are more honest: you're more likely to tell the truth about your opinion if you're betting money on it. Put your money where your mouth is. Whereas if somebody asked me whether I voted for Trump, depending on what that person looked like, I might say I didn't, and then I get counted as not voting for Trump. But if I bet money that Trump is going to win, that's a more accurate polling system. There's real value there. Well, a $600 million betting surge should be a signal to everyone, a surge above what they usually do. It's a crazy amount of money that people were willing to bet on this, so yeah, absolutely, it's an indicator of opinion, of the US citizen or worldwide even, because everyone participates in Polymarket, right? It's not just the United States. Actually, they don't even allow it; you can't bet on it from inside the United States. It's outside the US. But from where I'm sitting right here, I looked it up, and I could. So if you guys want to make a bet, let me know. Jim, I know you don't. Bill, if you want to make a bet on who's going to get bombed next in Iran, you let me know and I'll place the bet for you.
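A quick aside on why a prediction market doubles as a poll: a share that pays out one dollar if an event happens trades at a price that can be read as the crowd's implied probability of that event. Here's a rough, hypothetical sketch in Python. The 65-cent price, the stake, and the probability estimate are made-up numbers, and the 2% fee on winnings is simply the figure quoted in this conversation, not something verified against Polymarket's actual fee schedule.

```python
def implied_probability(price_cents: float) -> float:
    # A YES share that pays $1.00 on a win and trades at 65 cents implies
    # the market collectively thinks the event is about 65% likely.
    return price_cents / 100.0

def expected_profit(stake: float, price_cents: float, my_probability: float,
                    fee_on_winnings: float = 0.02) -> float:
    """Expected profit of buying YES shares at price_cents, given your own
    probability estimate, using the 2%-of-winnings fee mentioned above."""
    shares = stake / (price_cents / 100.0)     # each share pays $1 if YES wins
    gross_winnings = shares - stake            # profit above the stake on a win
    net_winnings = gross_winnings * (1.0 - fee_on_winnings)
    return my_probability * net_winnings - (1.0 - my_probability) * stake

print(implied_probability(65))                   # 0.65, the crowd's estimate
print(round(expected_profit(100, 65, 0.75), 2))  # ~14.58, a positive edge at 75%
```

That arithmetic is why the hosts treat a $600 million surge as a sentiment signal: prices only move when people stake real money on their opinions.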
So here's where the discomfort sets in, right? Are these guys profiting from human suffering? They definitely are, for sure. If you hear about something going on anywhere in the world, you can bet on just about anything and everything. Like you said, you could pose whether or not there's going to be a majority of Muslims in the UK in 2028. I'll guarantee you it's on there. Guarantee it. And it's interesting: you know how Polymarket makes its money? They take 2% of every winning bet. What a business model, right? And it's all crypto, so these people can create an account under some Joe Blow crypto wallet and just bet. Crazy. Anyhow, like I said, I'm not into any type of gambling; I think it's a bummer. But if we can use the data from it for positive reasons, if we can use it as a predictor of public opinion about the war, things like that, hey, use whatever works, because it seems to be a credible source.

Speaking of all this stuff, the wars, the betting, and that sort of thing, now we're talking about a different kind of defense. Vitalik has released a comprehensive quantum-resistant roadmap for Ethereum, and obviously that's going to make Ethereum more sought after, or at least feel like a safer bet. Because imagine waking up tomorrow and finding that 1.7 million Bitcoin, worth about $100 billion, are digital sitting ducks. That's the quantum computing threat Vitalik is racing to solve, and he's just dropped this new playbook to defend against quantum computers breaking the encryption. The Bitcoin community debates the quantum vulnerability timeline; some people think it could be three years, others think thirty. But in either case, it's something we can actually defend against now. It's like saying that in three years somebody could potentially break into your house easily. If there's a defense for that today, why would you be so foolish as to wait three or five years to implement it? I was talking with Radia Perlman a couple of years ago, and she said, Bill, we can implement quantum-resistant protocols if we start now, and it will take us a couple of years to do it. And in fact, NIST has put candidates out there, they exist today, and we're working toward building quantum-resistant encryption for all of our computing and other such things. She was right. If you know the threat is coming, why would you sit back, relax, and say, no, I'm totally covered, when there's a potential solution at hand that you could implement? It's just going to take time and money. So that's where we're at.

Yeah, the current cryptocurrencies like Bitcoin and Ethereum both use a kind of cryptography called elliptic curve signatures, and those signatures prove that a transaction came from the real owner of the key. That's the basis of it; we're not going to get into all the depth of that.
And right now Vitalik, and I'm not sure how to pronounce the name, I'm not sure either, has proposed implementing, like Bill said, NIST-approved post-quantum cryptography to fight off a future attack from quantum computers. So it's quantum resistant. The difference here is that it uses multiple layers of signatures instead of a single one: prove who you are, then prove who you are again, a hundred thousand times, some crazy number that makes it so difficult for a quantum computer to break every one of those signatures, because they just keep coming one after another. You broke that one? Here's the next one. Here's another one. It broke that one? Here's another. And it can keep spawning those over and over again. So it promises to be a good cryptographic way to fight off quantum attacks.

And not everything is subject to the problem. It's just some Bitcoin wallets, the legacy addresses, that are potentially exposed. Yeah, the ones from before 2012. Before 2012. But here's what I'm thinking. A lot of the people who own that crypto are deceased, or don't even know they have it. But if I knew I owned it, I'd sell and buy right back in, because the new stuff is protected, for now. Sure, we've got to solve the long term, but right now, in the short term, it's roughly 5% of all Bitcoin, stuck in pre-2012 addresses, that is super vulnerable, for the reasons Jim was describing: the cryptography is more primitive, more breakable. You know what's interesting? They're talking about burning it. Did you guys hear about that? Oh, yeah. One proposed solution is to burn that bottom 5% that was bought before 2012. Sure. But if they burn it, what about the poor people who own it? And I'm thinking, it's time to sell. Go ahead, Jim. You know why they would have to burn those? They have to change the address scheme. Everyone has to move to a different type of wallet with a different address scheme, and that requires every person holding one of those wallets to swap theirs out. You think that will happen? No. I believe most of the people with those wallets bought Bitcoin when it was a nickel or a quarter and lost their doggone passwords, because they didn't think it was that significant. They forgot about it, or they've passed away. They probably can't get into those wallets to begin with, or they would have cashed out, like you said, Gus, and bought back in with a new wallet. If I were a person who had lost the password for a significant amount of Bitcoin, I'd probably be trying to find somebody doing quantum work so they could break into my wallet and I could get it out and sell it. Right, or you could hit that little button and get a new password. It's like a white hat hacker, right? A white hat hacker, not a black hat hacker.
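The "prove who you are, over and over" idea Jim is describing is the intuition behind hash-based signatures, which are one ingredient of the NIST post-quantum lineup. As a toy illustration only, and not the actual construction in Vitalik's roadmap, here is a minimal Lamport one-time signature in Python. Its security rests entirely on a hash function, which quantum computers do not break the way Shor's algorithm breaks elliptic curves.

```python
import os
import hashlib

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def keygen():
    # Private key: 256 pairs of random 32-byte secrets, one pair per digest bit.
    sk = [(os.urandom(32), os.urandom(32)) for _ in range(256)]
    # Public key: the hash of every secret; only the hashes are published.
    pk = [(H(a), H(b)) for a, b in sk]
    return sk, pk

def message_bits(message: bytes):
    digest = H(message)
    return [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]

def sign(message: bytes, sk):
    # Reveal exactly one secret from each pair, chosen by the message bit.
    return [sk[i][bit] for i, bit in enumerate(message_bits(message))]

def verify(message: bytes, signature, pk) -> bool:
    # Hash each revealed secret and compare it to the published public key.
    return all(H(sig) == pk[i][bit]
               for i, (sig, bit) in enumerate(zip(signature, message_bits(message))))

sk, pk = keygen()
sig = sign(b"send 1 ETH to Gus", sk)
print(verify(b"send 1 ETH to Gus", sig, pk))       # True
print(verify(b"send 9 ETH to Mallory", sig, pk))   # False
```

A Lamport key is only safe for one signature, since each signature reveals half the secrets, which is why production schemes such as XMSS and SPHINCS+ chain huge numbers of one-time keys together in hash trees. That chaining is the "here's the next one, here's another one" layering described above.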
Here's the bigger picture on this, though. We're talking about Bitcoin, and that's serious, right? What did you say, Bill, 1.7 million coins, about a hundred billion dollars' worth? But there are other systems becoming vulnerable too that we really depend on: banking systems, government communications, military networks, the entire HTTPS system. RSA, yes. The least of our problems is going to be Bitcoin if this arrives a little quicker than we think. We'd be in deep caca. Yeah. And like I was saying, the wallets have to be changed out, the exchanges have to be reconfigured, custodians have to change, and it's a distributed environment, so it just goes on and on. There are something like six or seven layers that have to be worked through to make this happen, so it's not an easy thing. But they're motivated, and they should be; they have to get going right away. Say it takes three years to get everybody migrated over: you've got to start now, because quantum is going to be here in a few years. The time is now.

So I think this has been a great topic, and it basically tells us that normal Bitcoin is not at risk; it's the older wallets that are at risk, and like we said, some of those may be lost-password wallets or belong to people who are deceased. So that's the only thing at risk; it's not the major part of it. For now. For now. But to Jim's point, this is when you want to get ahead of it. And there are companies right now with quantum-resistant, or quantum-native, chains that already work to fight off a quantum attack: QRL has one, and Cellframe, IOTA, and Algorand, just to name four that already support it. If you moved over to one of those platforms, you'd have that protection immediately. So the impetus is on Ethereum and Bitcoin to get their act together quickly, and on all the other organizations too. Like I said, we're concentrating on this very visible, highly vulnerable use case, but it's coming after everyone.

Gus, what do you have for us in the next segment? You've been keeping it secret from us. I'm holding back because I know this one is important to you, so I'll set it up for you, and here's how I'll set it up. Imagine you had a computer that's a thousand times smarter than Albert Einstein, with the emotional maturity of a calculator. The race toward AGI, artificial general intelligence, the point where AI matches or surpasses humans, is bringing up an interesting dilemma: what are the implications of raw intelligence that's void of wisdom and executive function? No wisdom, no executive function. Think about it. This could create an AI system that says, I want to end poverty. Oh, I know, all I have to do is eliminate poor people. Very logical, and a hundred percent correct on its own terms, but it's not something humans would do. And this is right as Sam Altman says AGI is achievable by 2029, and I've heard other people say that as well. Very interesting subject, and Bill, I know you have a lot of opinions on this one. Well, let's define some of the terms first: singularity and AGI. The singularity is when the system becomes essentially self-directed, an entity in and of itself, when these AI systems reach a level of capability that exceeds ours and can then learn on their own, instead of being trained by humans or human coding or what have you.
Once the singularity occurs, those devices become somewhat sentient, meaning they have an identity or a capability of their own. Maybe they don't have emotions, like we said, I get that, but they can self-improve without human assistance. That's scary. Yeah. So what kind of can of worms have we opened? All these people who are out there running OpenClaw, connecting their WhatsApp, their Telegram, their email, and that sort of thing to these open agent systems. Like we all are. I've looked at that, and yeah, I'm not going to do it. But our own systems have some of those capabilities right now. We enjoy some AI capabilities and advantages where our agents do a lot of work for us and help us, so we can create a better podcast, right? Before we ever go on air, and I'm just going to say this, we have what we call mock podcasts. Here we go again. These mock podcasts are where the AI takes characteristics of each of our personalities and personas, don't listen to him, folks, we're smart, we're really good at what we do, and produces a five-minute mock podcast on these topics, in our voices. And we listen to ourselves talk. Sometimes Gus or Jim says something and I think, boom shakalaka, I wish I'd said that, and it was in my voice. It helps us learn and build muscle memory. We don't use any of it verbatim, but once those lines get into our heads, it creates muscle memory, so when Bill, Gus, and Jim start talking live, we've already heard ourselves making jokes and quips, and we're better than we would be otherwise. And Bill, you called this when this whole podcast was just a thought. I remember you talking all about this muscle memory, and you've done a really good job of orchestrating it. I appreciate it. I try to make it easier, and that's where our AI helps us accomplish those things. And sometimes I sit back and look at it and say, oh my God, what did I create?

This is a heady subject this week, and we all like to talk about it. The term sentience has been around since the 1950s; it's not something new. Singularity, sentience, all of that has been around, and the whole idea of AI was talked about well before it became a reality, whether it started in science fiction or with mathematicians who theorized that this is where we would eventually end up. They predicted this back in the 50s; we're just finally arriving at what was predicted a long time ago. So generally we can say: AGI is when a computer has knowledge equal to a human. The singularity is when it has surpassed humans. And the next level after that is sentience. That's where we're eventually headed, and it's what people are very worried about, because that's when the computer believes it is a being in itself, that it has a human aspect, that it has feelings. We're giving up a lot of responsibility. That could come; nobody knows when.
But right now, according to Elon Musk, we are in the singularity already, and he's saying that in two years or less we'll have this AGI we keep talking about, where computers equal what any human knows; that will be commonplace in a couple of years. And then the singularity will be fully fleshed out by 2028, and at that point we can't keep up with the computer at all; it has completely surpassed us. And as Bill was saying earlier, it is already training itself and training other AIs. AIs talking to AIs: we see that in agents talking to each other in these Moltbots, and in robots that can watch another robot do a task and learn from it. Surgery, whatever type of physical task, they can learn from each other. We talk about self-driving cars all the time, and Bill, you made a little post the other day saying that Musk says any car can be self-driving now. Yeah, because we have robots now, so a robot can sit in the car and drive. You don't have to have an autonomous car; a robot can drive it, because it has access to all the AI and all the systems, and it only needs its two eyes. I think we're in the golden age right now. I hope it goes well.

Now, let me toss this out. Einstein was a brilliant guy, and if you've never watched the Genius series, or never read Walter Isaacson's biography of Albert Einstein, I encourage you to do it. It's about 11 or 12 hours on Audible, so it's a long one, but Isaacson does such a good job of making you feel like Albert Einstein is your friend and neighbor. Do you understand quantum mechanics any better after that? Oh, yes, you learn about it. It talks about him on a train, watching the fences and the telephone poles go by, boom, and he starts inferring all this about movement and motion. Then he starts asking, okay, what about light? Can light bend because of gravity? Does light have mass? All these different things; that's where black holes come from. Imagine having Einstein as your next-door neighbor and having coffee with him every morning, talking about all these mathematical and quantum mechanical things. Well, now you have AI, which has a lot of Einstein smarts in it, but it's like a hundred Einsteins in a hundred different topical areas. You can wake up in the morning, turn on your AI system, and talk to it. How cool is that, having breakfast with a hundred Einsteins who are experts in a hundred different subject matter areas? So there is some fear, but there's also this rush of, wow, I'm so thankful to be alive during this particular time. And fasten your seatbelts, because it's getting more interesting. Until they kill us, it's going to be really fun. And maybe that's because we're all roughly 70 years old. I'm going to be 70 next month. Jim is the oldest among us, he's already 70, and we treat him with respect because of it, and Gus is the youngster because he won't be 70 until June. But hey. You know, with this whole AI coding thing, we are concerned about the singularity. It's smarter than us.
And how are we going to code against that, so we don't have a problem where it makes decisions on its own? They've thought about this. It's not only about putting all the guardrails in place, although that is important; it's also about giving AI an incentive. Just like we like incentives, these systems respond to incentives. They like to know: if I do good, I get a reward, just like we do. That's what I've been trying to think through. Yeah, and you don't get turned off. Right. So how can we get AI to do good things, do them at a faster rate on our behalf, and be incentivized to do it, so that it gets better at solving problems, whether in medicine or anything else? That's a very important point. So that's one way they're trying to move away from having to hard-code every possible rule just so AI doesn't go off the rails. The reality is, if we can also teach it to do things that benefit humankind, and it gets a reward of some kind, then I think we've done a good job steering it in the right direction. I think Jim's going to save us. Yeah, Jim's going to save us. No, that's a great point.

So what else do we have on this particular topic? Gross domestic product. Global gross domestic product is about $123 trillion. That's every man, woman, and child on earth and the economic value they create. I just looked that up, by the way. And McKinsey & Company, the consulting firm, is estimating that AI could contribute $13 trillion to global economic output by 2030. That's only, what, four years away? And what percentage is that? It's about 10% of global output. And that base is every person sewing a dress in every country, every person chopping wood and selling it, every sale of something produced or done, every financial transaction: $123 trillion. So over 10% of that, within the next four years, will be created by the advent of AI and its ability to let each one of us produce more. How does it help that seamstress? It can help her get the business going. She can find patterns that are more saleable, and the AI can do the research on which patterns are going to be in demand; that's the marketing. It goes throughout the whole economy, and it doesn't really matter who you are: you can be improved without degradation, without making you less. That's what AI is doing. There are still things we're all concerned about, and we're looking at those, but to give it some scale, that's 10% of global GDP.
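For a quick sanity check on that "about 10%" figure, here is the arithmetic with the two numbers quoted above, the $123 trillion output figure and McKinsey's $13 trillion estimate:

```python
global_output = 123e12      # rough global economic output quoted above, in dollars
ai_contribution = 13e12     # McKinsey's estimated AI contribution by 2030
print(f"{ai_contribution / global_output:.1%}")   # 10.6%, i.e. "about 10%"
```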
Robotics will have a lot to do with that, right? Robotics are going to have a huge impact on how we as a civilization become much more productive. Elon Musk has been talking about the Cybercab, and we did a special on the Cybercab. It looks like the average person could buy one, and I've made reference to this before: it pays for itself completely within a couple of years, and it's pure profit after that. So if you don't need a vehicle all the time, buy your own Cybercab, and when you don't want to use it, have it go out and make money for you. That's a good use of robotics. Yeah, I would buy one; it's just not available yet. Better yet, I'm looking at whether I really need two or three cars. Almost every household, husband, wife, and kids, has three cars, right? Do you really need all of those cars? Right now we already have them, so that's a sunk cost. But going into the future, when a kid gets out of college and goes to work, you can hire a Cybercab to take you to work for less than the cost of driving yourself. Right now I think the IRS deduction is 72 cents a mile, and I think the Cybercab will get down to around 40 cents a mile. So you could basically buy that transportation at 40 cents, deduct it at 72 cents, and come out ahead. It's just like music: remember when we used to own our own music? Cars are going to be the same thing; we're going to remember when we used to own cars. And okay, I need a Budget rental truck to move this weekend. That's the other thing: if you need to move, two men and a truck becomes two androids and a truck.

To be a downer, I want to stick with the security side of things, which I always gravitate back to. We're so happy, Jim. Believe me, we love this stuff. We're the cheerleaders for AI and for the future, but we're realistic at the same time, right? There's plenty of proof out there that AI can go wrong. I can recall an instance where Amazon was using AI for its recruiting system, and after it learned biased patterns from historical data about previous hires, it started to discriminate against women; it would not hire women after that. Who would have thought? They had to trash it, because it had basically learned to discriminate. So we have to find ways to keep systems from doing that in the first place, not just catch it later when we find out it went wrong. That's why I was talking earlier about incentives: if it gets things right, it gets something positive.

We've offered some counterpoints here. A lot of this contention we generate between us deliberately, to make it more dramatic, and if you don't agree with us, that's okay; we don't need you to agree. So thanks again for joining us. If you found today's discussion helpful in any way at all, we're going to say the same thing we say every week: give us a couple of thumbs up, drop us a comment about this week's topics, and subscribe to the channel. We really appreciate it; it keeps us motivated and keeps us wanting to do this. So thanks so much for tuning in, and hey, stay safe. Take care. Yeah, thanks guys. Always a pleasure.