AI Expert Panel: America’s AI Plan, the End of Google Search & the Next ChatGPT (Transcript) – The Singju Post


Read the full transcript of Moonshots podcast host Peter Diamandis in conversation with guests Dave Blundin, founder of Link Ventures, and Alexander Wissner-Gross, computer scientist and investor, on “AI Expert Panel: America’s AI Plan, the End of Google Search & the Next ChatGPT,” July 29, 2025.

Introduction and Guest Introductions

PETER DIAMANDIS: Everybody, welcome to Moonshots, another episode of WTF. I’m here with my moonshot mate, Dave Blundin, the head of Link Exponential Ventures, an extraordinary visionary. And with a new moonshot mate, Alex Wissner-Gross. Our friend Salim Ismail is driving his son to some summer camp because that’s what you do when it hits July and August.
Dave, good morning.
DAVE BLUNDIN: Good morning. Oh my god. I’m so excited to talk to Alex this morning.
PETER DIAMANDIS: Well, listen. I mean, I know how awesome Alex is because I get a chance to hang with him in Boston, but most of the folks viewing don’t. Would you take a second and tell everybody watching about Alex?
DAVE BLUNDIN: Yeah, absolutely. So Alex and I have been chatting for years now about everything going on in AI. You know, I meet so many MIT and Harvard people, but you rarely meet these crazy true high geniuses.
So Alex, he got three degrees from MIT in four years: math, physics, and computer science, the three hardest degrees you can get. And then MIT banned it right after that because it’s just too stupid that any human being would ever put themselves through that. Peter, you would know. I mean, you’ve seen that before. Crazy, crazy productive.
But also, Alex was one of those child geniuses who worked with nanotechnology when he was like eighteen. If you want to check out Bloomberg, you can read articles about him as a teenager. Then he went to Harvard, got his PhD in physics, and he’s been studying and reading every single detail of what’s going on in AI.
So just an absolutely wild brainstorming partner, but super, super fun to talk to. We meet every Monday. I can barely keep up with everything that he’s studying. It’s just so fun. Alex, welcome to the pod.
ALEX WISSNER-GROSS: Thanks, Peter. Thanks, Dave. Amazing intro.
PETER DIAMANDIS: Well deserved. Yeah. And I love our conversations and excited to have you share your brilliance with everybody viewing. So get ready, guys. We’re going on an extraordinary trip.

Today’s Hot Topics

Today’s episode is chock full of hot topics. We’re going to be talking about the AI wars, who’s winning, and it is really a battle galore. We’ll be diving into America’s AI action plan and what it means. The browser wars are back. This is not Bing versus Google. This is Google really fighting for dominance across all the LLMs. Of course, China versus the USA, and a special peek at “I’ll take fries with that Model Y,” and you’ll see what that means in a moment.
So today, I am sporting my exponential mug here and my gratitude mug. One’s got water. One’s got coffee, but that’s the mindset I’m in.

Looking Back: AT&T’s 1993 Vision

Speaking of mindset, I’m going to take you guys back to the year 1993, and there are these ads that are playing that back then looked like science fiction. I want to just take a second to just sort of acknowledge how far we’ve come. So listen up. This is AT&T 1993, the “You Will” ad campaign.
[Audio clip plays]
“Have you ever opened doors with the sound of your voice? Carried your medical history in your wallet? Or attended a meeting in your bare feet? You will. And the company that’ll bring it to you, AT&T.”
Well, kind of close, but the company that brought us a lot of that stuff wasn’t AT&T.
DAVE BLUNDIN: Oh my god. Do you remember those? I do. Actually, it was so inspiring at the time and nobody believed any of that stuff would actually happen at the time. Just a few years later, it all becomes reality. But you’re exactly right. It’s like, yeah, it came from a whole bunch of startups and a whole bunch of companies you never would’ve thought of. Google didn’t even exist back then.
ALEX WISSNER-GROSS: I’ll take the contrarian point there. I would argue not only did the future materialize for the most part, but AT&T, which, you know, think back now thirty plus years, AT&T was, for a brief period of time, based on my research, the most valuable company in the world in 1993.
So there’s a certain sense in which, as the largest market cap company, you can see the future. And I would argue that AT&T, which, as you remember, was the sole US carrier launch partner when the iPhone launched, was in the end a key enabler of the future, to the extent that most of the tech demonstrated in “You Will” was video conferencing or substantially similar to video conferencing. They did at least help enable that.
PETER DIAMANDIS: Actually, I agree. I remember I bought one of their first video phones, which was a telephone with a screen that would refresh an image every five seconds. I bought one for my folks; I was living in California, and they were in Florida.
But the question is, what’s it going to be like thirty years from now? Given the speed of change, you know, Ray Kurzweil was on my stage at the Abundance Summit, and he’ll be back again. And he said, “Listen. Between 2025 and 2035, the next decade, we’re going to see as much progress as we saw between 1925 and 2025.”

The Constant Turnover of Leadership

ALEX WISSNER-GROSS: I think one of the things that makes the American economy so incredibly strong is the constant turnover of leadership. And, you know, if you look at the magnificent seven today and you say how many of those companies existed back when this ad was made, you know, it’s basically none. Well, Microsoft was still around.
PETER DIAMANDIS: 1993, Microsoft was little.
ALEX WISSNER-GROSS: Microsoft was existing. Right. And Exxon and Walmart, they were all trading places throughout ’93.
PETER DIAMANDIS: The pace of change is always accelerating. So the change we’ve seen since 1993, thirty years in the past, is about the same amount of change we’ll likely see from today to 2030. So I suspect there’ll be a whole bunch of new names in the trillion dollar valuation camp that we’re not thinking of right now, and the challenge is to identify them.
ALEX WISSNER-GROSS: Kind of predict which ones will.

The AI Wars Begin

PETER DIAMANDIS: What’s pushing this right now is the AI wars. So not the Clone Wars. If you’re a Star Wars fan, it’s the AI wars here today.
Just take a second and give a quick overview on what’s going on because it is everybody’s trying to outdo everybody else. Here we go.

Elon’s GPU Targets

Taking a quick look. First off, of course, Elon with xAI and his GPU targets. Here are the numbers. Colossus launched in July of ’24. Hard to believe it was a year ago. Right? With 100,000 H100s. They doubled it to 200,000 in three months. Colossus 2 is getting ready for launch with 550,000 GB200s, the equivalent of 5.5 million H100s. And his goal is 50 million H100 equivalents within the next five years. I mean, that’s insane.
ALEX WISSNER-GROSS: I think it’s so incredible. If my arithmetic is correct, in today’s GPU dollars, 50 million H100s is the trillion dollar AI supercluster that people have talked about for the past few years. Obviously, there will be some deflation of cost, so maybe it’s only a few hundred billion dollars instead of a trillion dollars. But in today’s GPU dollars, that’s a substantial fraction of the US economy.
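As a quick aside for readers, that back-of-envelope can be checked in a few lines. The per-unit price here is an assumption for illustration (H100 boards have been widely reported in roughly the $25,000 to $40,000 range; the real blended fleet cost is not public):

```python
# Back-of-envelope sanity check of the "trillion-dollar supercluster" claim.
H100_EQUIVALENTS = 50_000_000        # the stated five-year target
ASSUMED_PRICE_PER_H100_USD = 25_000  # assumed; low end of reported range

total_usd = H100_EQUIVALENTS * ASSUMED_PRICE_PER_H100_USD
print(f"${total_usd / 1e12:.2f} trillion")  # $1.25 trillion at this price
```

At the low end of the assumed price range the total already lands at $1.25 trillion, which is why even generous cost deflation still leaves a several-hundred-billion-dollar figure.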
DAVE BLUNDIN: For sure. Alex and I had a meeting earlier this week with our new secretary of commerce coming into the state of Mass, and Alex put together a brilliant eighteen point plan on how to be a dominant state, or a successful state, in the world of AI. I suspect none of it resonated, but it will. After these numbers on this chart come true, it’ll all come back to Alex.
But yeah, I did a little bit of research on this: each of those GB200s actually has two Blackwell GPUs. So you’ll see production numbers for GPUs, but you have to divide by two, because each one of these superchips uses two of them. There is an acute shortage. Everybody’s going to want those chips. They already do, which is why NVIDIA’s worth so much.
But when you see all these numbers of GPUs, remember, if you’re using the Blackwells, the GB200s, you have to divide by two. And so, you know, the 10x performance improvement that you see on this chart between H100 and GB200, it’s a little bit overstated. Actually, it’s about a 4x raw compute performance increase. But then the NVLink networking is much, much more efficient. And there’s a bunch of other innovations. So you do effectively get 10x the AI out the other side.
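Dave’s divide-by-two caveat and the 4x-versus-10x distinction can be captured in a small sketch. All the multipliers here are the approximate figures quoted in the conversation, not vendor-confirmed specs:

```python
# Converting GB200 superchip counts into rough H100 equivalents.
BLACKWELL_DIES_PER_GB200 = 2    # two Blackwell GPUs per GB200 superchip
RAW_SPEEDUP_VS_H100 = 4         # approximate raw compute multiplier
EFFECTIVE_SPEEDUP_VS_H100 = 10  # approximate end-to-end multiplier (NVLink etc.)

def h100_equivalents(gb200_count: int, effective: bool = True) -> int:
    """Translate a count of GB200 superchips into rough H100 equivalents."""
    factor = EFFECTIVE_SPEEDUP_VS_H100 if effective else RAW_SPEEDUP_VS_H100
    return gb200_count * factor

# Colossus 2's reported 550,000 GB200s:
print(h100_equivalents(550_000))           # 5500000 H100 equivalents
print(550_000 * BLACKWELL_DIES_PER_GB200)  # 1100000 individual Blackwell dies
```

Note the asymmetry: GPU production counts should be halved to get superchip counts, while performance comparisons against the H100 use the roughly 10x effective multiplier, which is what makes 550,000 GB200s line up with the 5.5 million H100-equivalent figure.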

OpenAI’s Response

PETER DIAMANDIS: So the question is, how much of this is driven by the battle between Elon and Sam? So check this out. Of course, OpenAI’s goals have to outdo Elon’s goals, and this is a tweet from Sam Altman. He says, “We will cross well over one million GPUs brought online by the end of this year. Amazing. Very proud of the team, but now they need to figure out how to get to 100x that.”
So, I mean, it’s an extraordinary battle. Thoughts?
ALEX WISSNER-GROSS: To the extent that much of the compute demand is inference demand, call it a half to two thirds of compute allocated to inference as opposed to training or pretraining, a lot of this is potentially going to be unlocked by new use cases. The CapEx requirements, the OpEx requirements, are ultimately going to be driven by new use cases.
If we can solve all the world’s problems, if we can cure every disease, that’s going to unlock an enormous CapEx budget that can then be invested back into these GPUs. So I think the elephant in the room here is can we solve math, solve physics, solve medicine, and through those solutions, reinvest the gains directly into these compute budgets to unlock millions of GPUs and hundreds of basis points of GDP being allocated to GPUs.
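For scale, Alex’s “hundreds of basis points of GDP” can be put in dollar terms with a quick sketch. The GDP figure is an assumption for illustration (US GDP was roughly $28 trillion in 2024); one basis point is 0.01%:

```python
# Rough translation of "hundreds of basis points of GDP" into dollars.
ASSUMED_US_GDP_USD = 28e12  # assumed figure for illustration

def bps_of_gdp(basis_points: float) -> float:
    """Dollars represented by the given basis points of the assumed GDP."""
    return ASSUMED_US_GDP_USD * basis_points / 10_000  # 1 bp = 0.01%

print(f"${bps_of_gdp(100) / 1e9:.0f}B per year")  # $280B per year
print(f"${bps_of_gdp(300) / 1e9:.0f}B per year")  # $840B per year
```

So “hundreds of basis points” of GDP flowing into GPUs would mean hundreds of billions of dollars per year, consistent with the CapEx figures discussed elsewhere in the episode.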
PETER DIAMANDIS: I want to dive into that with you, Alex, a little bit later. What does it mean to solve math, solve physics, solve medicine? Because I think this is part of the extraordinary future that I wake up just vibrating about every morning and which most people don’t yet fully grok. But let’s get into that in a little bit.

Meta’s Manhattan-Sized Data Center

So, again, it’s xAI versus OpenAI. But Meta is not going to be left out of the picture. I love this. So Meta is building a Manhattan sized data center. So Prometheus is a multi-gigawatt data center. You know, it’s interesting that we’re now measuring data centers in terms of power instead of necessarily chips. Right? So it’s like a one to five gigawatt data center.
Even more interesting, to get to speed, they’re deploying in hurricane proof tents, which is insane. And then they’ve got their plans for their second mega cluster, Hyperion, in the works.
So, Dave, what do you make of all this? Just all out AI wars.
DAVE BLUNDIN: Yeah. There’s no precedent in history for what’s about to happen. You know, everyone always compares it to the internet explosion of growth, but this is so much bigger, and it’s unbounded, you know?
You can see, there’s no such thing as a hurricane proof tent, obviously, but the race to get these things up, I mean, seriously? I’ve been in tents before. I don’t care what you make it out of. There’s no way. But the race to get these things up and running is so acute, so fast, you know, and it’s hard to get the power.
And so you put the chips where the power is, and any structure is good. I’m sure they’ll build around the tents eventually. But yeah, the scale is basically only constrained by how many chips can I get, how do I get them wired together? You know, we’re finding the power. It’s actually depriving other manufacturing operations of that same power supply, but nobody cares because this is so much more important of a use case.
So the race, you know, we’re still in the first or second inning, but man, is it going to get exciting over the next year or two? It’s already exciting.
PETER DIAMANDIS: Alex, what do you make of this?
ALEX WISSNER-GROSS: It’s all about the latency of construction. If we had years to build these data centers, we would probably use normal construction methods.

The Future of Data Centers and Computing Infrastructure

ALEX WISSNER-GROSS: But if the data centers have to be erected on a very short time scale, materially less than a year, tents are the way to go. And to the extent that there is further acceleration in the construction space, I think, optimistically, this points the direction toward totally new form factors for data centers.
One could imagine ocean based data centers or space based data centers, and maybe even new modular data centers, data centers on wheels, to the extent that electric vehicles with GPUs ultimately become distributed data centers. I think potentially we find ourselves in the near future where data centers occupy all sorts of environmental and geographic niches that otherwise would seem insane.
PETER DIAMANDIS: I love that.
What’s interesting is those GB200s are sixty thousand dollars a pop. And so you have to be really careful that you don’t put them on a set of wheels and it falls off a cliff or something like that. The idea that Elon’s spoken about is having all of the Teslas be deployable energy sources. Right? So you can imagine them having GPU capability on board as well, and just millions, tens of millions of Teslas computing and deploying energy around the country.
DAVE BLUNDIN: You know, one of the reasons that’s such a brilliant idea is because the data centers need absolutely continuous power. You can’t just turn them off at night because the chips are so expensive. They need to run full throttle all the time. And so all that Tesla power that’s buffered up in the cars, if you have a wind or solar outage and you need power right away, buffering it in the cars actually makes a ton of sense.
PETER DIAMANDIS: But look at the size of this. I mean, if you’re listening, what I’m showing you here is an image of the Prometheus data center, actually Hyperion, mapped over Manhattan. It’s the size of Manhattan. We’re basically taking the very, very first baby step toward computronium, where everything’s turned into compute. Alex, what do you think about that?
ALEX WISSNER-GROSS: I think that’s the multitrillion dollar question. Do we find ourselves in the near future where, due to what one might call horizontal exponentiation, as opposed to the vertical exponentiation of Moore’s Law, we have to disassemble our solar system because we need the compute? Or do we discover algorithmic advances or physics advances that make the entire notion as laughable as the fears from, say, the early twentieth century that horses would overrun Manhattan?
Maybe this is the naivety of twenty twenty five. Maybe in twenty years we will have achieved such advances in physics and algorithms that the idea that we need to tile the Earth’s surface with computronium AI data centers will seem laughable and quaint. Well, we’re going to find out.

Meta’s Superintelligence Team and Talent Wars

PETER DIAMANDIS: So Meta’s in the game, in a number of different ways. We’ll see that. And here’s one of the other ways that Meta’s in the game. And I just took a moment to lay this out because I think it’s fascinating. So Meta’s superintelligence team has an extraordinary makeup.
Let’s take a look here. First and foremost, half of the team is from China. Right? So these are Chinese Americans or Chinese immigrants that have been hired by Meta. Forty percent of Meta’s team are from OpenAI, I should say, poached from OpenAI, twenty percent from DeepMind, and, of course, fifteen percent from the recent acquisition of Scale.
Seventy five percent have PhDs. And I’ll just make one note that each member of the team is likely getting a salary of between ten million and a hundred million dollars per year, and that’s at the lower end. So what do you make of this makeup, Alex?
ALEX WISSNER-GROSS: I think the headline is the bottom line here about the compensation. I think that this is a preview of post scarcity economics at the individual level, where individuals who are sitting on top of enormous amounts of per capita compute have compensation equivalents in the hundreds of millions.
This is a preview optimistically of what everyone on Earth, post abundance, post artificial superintelligence, could be seeing.
DAVE BLUNDIN: Peter, it’s so rare you get to live world history. It’s so cool. But remember we met with Mark Chen? That was about a month ago at OpenAI.
PETER DIAMANDIS: Yes.
DAVE BLUNDIN: Yeah. So he’s the head of research at OpenAI, MIT alum, awesome, super low key, very nice guy. He spent the whole day with us. Then it was, I think, just a couple days later, he got that call from Mark Zuckerberg saying, “Hey, what would it take to bring you over to Meta? A hundred million, a billion?”
And he said, “I’m going to say no to a billion dollars.” And then that made the Wall Street Journal cover just right after. How do you do that? I mean, first of all, your EA, your assistant says, “Oh, by the way, Mark Zuckerberg’s on the phone” and you’re like, “Oh boy. How high is he going to go now?”
But Mark Chen in that call with Zuck was the guy that triggered this whole wave, because, first of all, he said no to the billion dollars, but then he said, “You really ought to be investing more in brilliant human capital. You’re going to spend, I guess the other slide said, seventy five billion imminently on the chips, on the CapEx. A good algorithmicist can get two, three, four x more value out of that investment. Put some money behind the people here.” And so I think that trend is going to continue.
I have a question for you, Alex. How long is individual human talent going to be recognized and paid at this level? Because, I mean, as we head towards, you know, digital superintelligence, isn’t the human in this specific role going to be obsoleted, or is this kind of salary level and this kind of commitment to human capital going to continue for decades?
ALEX WISSNER-GROSS: Well, you’re asking the accelerationist here. I think many humans are going to choose to merge with the machines. So I think it’s almost a trick question. I think this is a preview of near post scarcity economics.
PETER DIAMANDIS: So by the way, everybody, just so you understand, Alex Wissner-Gross is definitely an accelerationist, and we’re going to speak about timelines that are shockingly fast compared to what others would say. Let’s put it that way. But I still want to ask: are we going to see salaries dropping off or continuing to rise? What’s your bet on this, Dave?
DAVE BLUNDIN: Well, it’s continuing to rise, but I think most people are thinking, “Well, what does that mean for me?” Because it’s really weird. Right? You’re seeing some sets of people get insane comp packages while other people are getting laid off.
And so the storyline is turbulence, not, you know, there’s going to be incredible abundance, so there’s plenty of value to go around, but it’s not going to land where you would normally expect, and you have to really, really think through, “Okay, what’s rising and what’s falling?”
Sam Altman is always saying this on stage, “Be nimble, rethink your life, rethink it every month in light of what’s happening in AI and don’t get stagnant.” You know, focus, stop listening to mainstream media, wasting your time on the next whatever, focus on what’s happening right here and use it to remap your life. So yeah, no, it’s going to go up of course. But where is it going up and why? What type of jobs, what type of people, what type of roles?

Building Resilient Teams in the AI Era

PETER DIAMANDIS: You know, Dave, one of the things that you state when you’re searching to invest in teams through Link XPV is you’re looking for best friends. And, again, for those who don’t know, Link X Perpetual Ventures is a billion dollar plus firm based in Cambridge that invests eighty percent in MIT and Harvard teams at the beginning. When you’re looking at team makeup, can you explain that once again?
DAVE BLUNDIN: Yeah, that’s a great question in the context of this slide. We’re looking for super, super tight knit teams. They have to pass what we call the Fred Wilson test. Fred Wilson is the most successful venture capitalist of all time, founder of Union Square Ventures, an awesome guy to study in his own right. But he has this three part rule.
He only invests in teams of three or more best friends who “write the code themselves,” so all three are equally capable of doing each other’s jobs, so they’re replaceable amongst each other, and he trusts them. And if they pass those three filters, you invest, even if it’s a bad business plan.
Because the business plans in the age of AI can change in a moment, but the team dynamics won’t. But now it’s more important than ever that they’re best friends, because sooner or later Google’s going to call or Meta’s going to call and try and take one of the three away. And if that person cracks and defects, then your company unwinds. We’ve literally never had a loss on a deal that comes out of MIT Harvard Northeastern, except in a scenario where somebody defects. Other than that, they always succeed in the end, and so that’s the core theme that we’re looking for.
PETER DIAMANDIS: I want to nail that down for all the founders listening. At the end of the day, you really want to partner with best friends. You want to partner with people you’ve known for a while who are going to stick by your side. And I asked that question for that very reason.
Here’s the other question. For a number of decades, we’ve had this immigration issue where people get their PhD at MIT, or Harvard, or Caltech, whatever the case might be, and then they’re forced to go home. They’re basically kicked out of the country instead of, like, stapling a green card to every diploma out there. And when I see numbers like this, like fifty percent with origins in China, I’m like, we want those people to stay in America, become Americans, and stay inside of our ecosystem.
DAVE BLUNDIN: Yeah, well lately, you know, the government has done an amazing job of getting out of the way recently. And one of the areas that we’ve gotten tremendous help is these O-1 visas. You know, O-1 visas used to be for celebrities, sports stars, you know, “Hey, the Red Sox want this incredible pitcher from Japan, get him an O-1 visa, because the season starts in just a few weeks.”
So now they’ve extended that to apply to AI experts. Like, “We desperately need this person from Romania, from China, from India. They’re brilliant, and we need them; get them an O-1 visa.” So if they have the credentials to be a superstar of AI, they can actually stay in the country immediately on an O-1.
PETER DIAMANDIS: Yeah. Incredible.

Polymarket Predictions and AI Model Competition

PETER DIAMANDIS: So, Alex, thank you for these numbers. For those of you watching, these are Polymarket predictions for the best AI models at the end of this month, at the end of August, and at the end of the year. Alex, can you take a second, explain Polymarket, and do you trust their prediction engine?
ALEX WISSNER-GROSS: Oh, it’s such an interesting question, because this particular Polymarket prediction is based on language model arena style rankings: masses of people having conversations with text based language models and with image based language models. And so my understanding of what it’s actually measuring is whether the “average person” interacting with a frontier model prefers that interaction or not.
And the problem with that is that as we achieve super intelligence, many of the frontier capabilities are not necessarily benchmarkable through conversations with the average population. So I think if anything, we’re starting to see many of these, call them community based benchmarks start to recede in terms of their predictive power, and it’s left increasingly to specialist benchmarks that measure exceptional abilities.
For example, open problems, the ability for AI models, hopefully in the near future to be able to solve outstanding open problems in science and math and engineering and in other disciplines that nonetheless can be verified. I think those will be far more predictive in the next few months and the next two years than conversations with the general population, which is what you see reflected in these predictions where it’s the same top four or five labs constantly switching places.
PETER DIAMANDIS: Yeah. And so just to spell this one out, by the end of August, the market is saying OpenAI will be in the lead with fifty four percent over Google with forty one percent. And I think that’s basically the telltale sign of GPT-5 coming online. But not to be outdone, by the end of the year, Google’s placed at forty five percent likely to be the lead over OpenAI at thirty one percent.
And I’ve seen a lot of data that basically says, you know, Google is a little bit slower in the race, but will dominate over everything else given their strength. How do you feel about that, Alex? Do you sort of view Google as the ultimate winner here, or how do you place them against Meta and OpenAI?
ALEX WISSNER-GROSS: I think we want to live in a near future where there’s lots of competition, where there are lots of competing frontier models. And also, it’s a heterogeneous ecosystem where we have open models, closed models, open weights, closed weights, APIs, non APIs, edge based inference, data center based compute.
I think we want that really rich heterogeneous jungle and ecosystem of vendors. So big fan of competition here. Would like to live in that near term future. But I think there’s also, buried beneath these numbers, the big headline that we just zoomed right past the Turing test.
PETER DIAMANDIS: Oh, yeah. For sure.
ALEX WISSNER-GROSS: That’s essentially what this Polymarket is measuring, and it barely got any attention. Here we are debating who’s going to have the best post Turing model. The fact that anyone has a post Turing model at all is utterly remarkable.
PETER DIAMANDIS: Well, I think we’re going to whiz right past, you know, AGI or some version of AGI definition and past digital superintelligence.

The Singularity Debate and Current AI Landscape

PETER DIAMANDIS: Just going to be looking at what’s next, what’s next. Oh, by the way, oh my god, we’ve got superintelligence in our pocket. Charles Stross’s Accelerando, which, by the way, is one of the best novels ever, has, without spoiling it, an amazing scene where you have uploaded humans talking to each other on a star wisp, traveling to another star, debating when the singularity is going to happen.
I think that’s the world we find ourselves in today: debating whether it’s going to be frontier lab A or B that achieves slightly better post Turing test benchmarks. We passed the Turing test.
DAVE BLUNDIN: Yeah. By the way, that book Accelerando is pretty entertaining too, but we make all of our partners read it so that they can keep up with Alex. You know, the terminology alone is worth the investment. Read it.
PETER DIAMANDIS: It was fascinating. And I’ve got to read it again so I can fully grok what it has to say. But we’re in a moment where every single week, every month, there is a set of new developments pushing the limits.

Anthropic’s Valuation and Market Position

We haven’t spoken about Anthropic at all. So the headline here is investors value Anthropic at a hundred billion dollars. Its revenue surged from three billion to four billion in just a month, and Claude is generating two hundred million dollars at sixty percent margins from coding. It’s still considered one of the best coding engines out there.
Interestingly enough, of course, Anthropic has been teamed up with Amazon, and there’s been a lot of talk about does Apple buy Anthropic. Dave, what are you seeing here?
DAVE BLUNDIN: Well, I use it every day, and it does write the best code. There’s a talent war going on, though. Actually, Kush Bhavaria, one of our partners, wrote a really cool little memo, an open memo anyone can read, mostly targeted at MIT, saying, hey, if you look at the Golden State Warriors and Steph Curry and the dream team factor, Meta just did that. Look at all the people they just hired.
They have the dream team, and you can invest in them on Polymarket or Kalshi at like four cents on the dollar right now. I told him not to give investment advice. He did it anyway. They’re accumulating the talent and spending whatever it takes.
Now, Anthropic used to accumulate that talent because they’re so conscious of the safety side of this. Dario is kind of the thought leader on mechanistic interpretability and AI safety, and that attracted a lot of great AI talent. But now that same group of people is getting these hundred million dollar signing bonus offers. So we’ll see if they can keep up with that kind of pressure. But as of right now, Claude is the best at coding.

The Future of Frontier Models

PETER DIAMANDIS: I have a question for both of you. You know, we keep on hearing about the same names over and over again in terms of the frontier models. Will there ever be another frontier model that starts that we haven’t heard of yet?
DAVE BLUNDIN: There definitely will, because Alex and I are working on it. All right. Secretly. No, I think what a lot of people don’t know is the algorithmic improvements are factors of ten to a thousand. And so everyone thinks that because these guys have massive valuations and massive budgets, they’re going to run away with everything.
And in the short term that’s true, but in the long term if you do come across a hundred or a thousand x performance improvement, if you have the willpower to turn it into another foundation model company, then it can succeed. But then the big wild card is quantum computing. There you’re looking at just a huge step function opportunity. So I would say more likely than not, there’ll be one or two more that get in the race.
PETER DIAMANDIS: Alex, what are your thoughts on that?
ALEX WISSNER-GROSS: So for Anthropic specifically: I have a number of friends and former classmates on the founding team, and I’m very excited for their success. I think the valuation of Anthropic is perhaps reflective of software engineering being automated, with software engineering being the first major labor category to succumb, in whole or in part, to AI automation. I expect the same pattern to play out through other highly productive labor categories.
But I think it also points the way to eval driven revolutions. I made the point in an essay a number of years ago that the key to overcoming grand challenges in AI is having evaluations, benchmarks combined with data sets. And the stated institutional focus at Anthropic right now is almost entirely software engineering.
And so I think anyone aspiring to build the next great frontier lab, the next great frontier model, arguably should be laser focused on finding evals and benchmarks that no one else is paying attention to, and then working backwards from those evals.
DAVE BLUNDIN: Totally right. And if you look at the leaders of the top AI foundation model companies, Dario is the one and only who has been an AI research pioneer from day one. Everyone else moved sideways to get into their current role, whether it’s Elon Musk or Sam Altman or Mark Zuckerberg.
And so he has this much more innate understanding of the self-improvement process. He doesn’t want to win in coding because coding is, you know, flashier than Midjourney and jokes and videos. He wants to win in coding because of the self-improving AI process. And there’s some great research that Alex found this week on a scaling law for how self-improvement is going to unfold. I don’t think we’re going to roll it out today, because we’re still evaluating whether it’s real or not.
But he is in the middle of that phase. So if he wins this race, it’s because of that intuition he has: win the code war, then have the coding engine run all night long, every day, self-improving the algorithms themselves, and get that loop started. Insane.
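The self-improvement loop Dave describes can be caricatured as hill-climbing against a fixed eval: propose a change, keep it only if the score improves, repeat. This toy is purely illustrative; no real model or scaling law is implied.

```python
# Purely illustrative toy of a self-improvement loop: an "engine"
# repeatedly proposes a tweak to its own parameters and keeps it only
# if a fixed eval score improves.
import random

def evaluate(params):
    """Toy eval: score peaks at 0.0 when params reach a hidden optimum."""
    return -sum((p - 0.7) ** 2 for p in params)

def self_improve(params, steps=1000, seed=0):
    """Greedy hill-climbing: accept a candidate only if it scores higher."""
    rng = random.Random(seed)
    score = evaluate(params)
    for _ in range(steps):
        candidate = [p + rng.uniform(-0.05, 0.05) for p in params]
        cand_score = evaluate(candidate)
        if cand_score > score:  # keep only strict improvements
            params, score = candidate, cand_score
    return params, score

final, score = self_improve([0.0, 0.0])
print(score)  # climbs toward 0.0, the maximum of the toy eval
```

The interesting dynamic in the real case is that the "tweaks" are code changes the model writes itself, which is why winning at coding is the strategic prize.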

GPT-5 Rumors and Model Capabilities

PETER DIAMANDIS: All right. Here’s an interesting note. This is a set of tweets claiming that OpenAI’s o3 now runs on GPT-5, that all o3 requests are now being routed to an internal model called Zenith, which is GPT-5. So, Alex, what do you make of this? Do you believe it? And what are you seeing?
ALEXANDER WISSNER-GROSS: Part of me wants to say that, based on very publicized rumors, we’ll probably know the ground truth in a few weeks or the next couple of months. But anecdotally, I’m constantly peppering o3, one of my favorite models, with math and physics problems that are challenging for humans.
And anecdotally, I’ve seen over the past few days that o3, or what presents as o3, has become able to solve challenging mathematical physics problems that literally a few days earlier it was not able to solve. It’s one person’s anecdote, but it does suggest that o3 has perhaps had some sort of capability leap at the back end, and that’s very exciting.
PETER DIAMANDIS: So they’re just test driving?
ALEXANDER WISSNER-GROSS: Could be. Again, we’ll probably know the ground truth, hopefully, in a few weeks or the next couple of months, but seeing under-the-hood capability leaps is incredibly exciting.

Benchmarks and Model Performance

PETER DIAMANDIS: All right. Let’s talk about benchmarks. And, Alex, I’m going to ask you to lead this. So on this chart here, we’re seeing a couple of benchmarks: of course, Humanity’s Last Exam and the AIME 25 benchmark, comparing Grok 4 Heavy, Gemini 2.5 Pro, o3 Pro, GPT-5, and GPT-5 reasoning.
And the question, of course, is where are these numbers for GPT-5 coming from? Are they validated? And do we believe them? But if we do, GPT-5 is going to outrace everybody. So take me through this, Alex, if you would.
ALEXANDER WISSNER-GROSS: Sure. Well, I want to draw inspiration maybe from the spirit of Ray Kurzweil, who famously noted that when the Human Genome Project was one percent completed, it was actually fifty percent completed time-wise. I think the same notion likely applies here. Some might say, okay, look at Humanity’s Last Exam: we’re approaching fifty percent on some of these purported strongest, but unreleased, models.
I look at that, and then I look at, say, models that are achieving only twenty percent on Humanity’s Last Exam, and I say we’re most of the way towards fully saturating these benchmarks. I think the real story is here both on this slide and also the next slide. So on this slide, we have Humanity’s Last Exam and this year’s AIME, the Olympiad exam. And then we can jump around.
So GPQA, Google-Proof Question Answering, and SWE-bench, the software engineering benchmark, these are all saturating. We’re running out of benchmarks. We’re running out of evals that test frontier capabilities. That, in my mind, is the headline story. We need really hard evals now.
And going back also to the point about LM Arena and Polymarket, we need much, much harder benchmarks to understand and to differentiate between frontier capabilities, and the industry, I think, is essentially starved of those frontier benchmarks.

The Need for Abundance Benchmarks

PETER DIAMANDIS: Well, the other thing we need so badly is brilliant people like you to tie this into human good. If you said, “When I get to HLE fifty, sixty, seventy-five, what does that mean for solving all disease? What does that mean for discovering new physics?” Because it directly connects, and we know intuitively it directly connects, but it’s really mentally challenging to say, okay, what’s the timeline? How does it roll out? All the stuff that takes it from just a test to the real world, that’s just a great use of mind power.
ALEXANDER WISSNER-GROSS: Totally agree. And maybe just a quick thought on that. I would argue that what we’re sorely missing as an industry and as a research community are benchmarks that measure the ability of frontier models to solve open problems. When I see all of these charts, including, you know, the purported GPT-5 benchmarks, I see saturating benchmarks. I would like to see benchmarks that address open challenges, and I think those are the next frontier.
PETER DIAMANDIS: Yeah. I love that. Two thoughts here. The first is there’s going to be a point at which these models are solving problems that we can’t even understand, and it’s impossible for us to create closed benchmarks for them. But I agree with you, Dave. I mean, why aren’t our benchmarks things like: which model is helping us double human lifespan? Which model is helping us create the highest-efficiency fusion?
I mean, basically, models being driven to create this future world of abundance. This is some of the work that Emad Mostaque speaks about with his Intelligent Internet. But, you know, it’s interesting that people tend to gravitate towards competitions, and steer long-term decisions towards competitions, whether it’s making more money or getting an Oscar, whatever the case might be. Why don’t we use as benchmarks the things that uplevel humanity, and get companies and teams to focus on doing those things?
ALEXANDER WISSNER-GROSS: And that, Peter, in a nutshell, is what I think we’re about to see in the next two or three years. Call it abundance bench, or abundance benchmarking. And the more abundance-oriented benchmarks we see, I would predict that within two to three years of those benchmarks appearing, if they can be mechanically verified by benchmarking organizations, you’ll see a lot of those problems get solved by AI.
PETER DIAMANDIS: Yeah. I think I need to write a newsletter on this front. You know, what we measure matters, and it also influences where people spend their money, time, and egos. Dave, you were going to say?
DAVE BLUNDIN: Well, no. I would love to see that newsletter sooner rather than later because, you know, we never used to interact with the top politicians in the world. Now they all want to know about AI. And so we have access to the state house, to the White House, and if you can’t tie it to human outcomes, then you don’t get the voters.
And historically, we didn’t think in terms of voters, right? We thought in terms of AI progress: how does it benefit the world bottom up? But now everybody wants to know, what does this mean for me? Who’s going to figure that out if not you two, with you doing everything related to healthcare and space, and Alex doing everything related to science, physics, and technology?
And so if we can just tie it to: when will cancer be cured? What type of cancer? When will housing be solved? What will be the new economics? Emad is actually trying to figure out the new economics. But this is the place where we’ve got to figure all those things out.
PETER DIAMANDIS: This is effectively having AI solve the X Prizes of the future. I’m curious, if you’re listening here or watching as one of our subscribers, I’d love to know your thoughts. What abundance benchmarks should we be creating? Drop a note and share your thoughts on this.

AGI is Already Here

So speaking of Emad, here we go. Emad Mostaque believes AGI is here. So I’ll just read his tweet: “AGI is already here. All the components exist. We just need to stitch them together. Two years ago, who would have said an International Math Olympiad gold medal and topping benchmarks isn’t AGI?”
So, Alex, you’ve been saying this for a while. I remember our first conversations. I’m saying, you know, when are we going to reach AGI? You said, “Peter, it’s kind of here already.” You were way ahead of everybody. So talk to me about that.
ALEXANDER WISSNER-GROSS: Yeah. I would take the position that AGI has arguably been here since, at the very latest, the summer of 2020, and that we’re five years into AGI. I picked the summer of 2020 because in May of 2020, OpenAI released the GPT-3 paper, “Language Models are Few-Shot Learners.” And that was the first time that in-context learning, the notion that you could pose a task within the context window of a language model and have the model learn in the moment, just in time, seemed to actually work.
One could even look further back at early language models based on n-tuples and smoothing, going back decades. But I think when we look back with the benefit of hindsight, we will say this was a more or less smooth exponential. There was no singular moment, no before and after. It’s just a smooth exponential of compressing human knowledge and compressing world knowledge.
And I’ve also drawn the analogy that AI is arguably the inevitable result of compressing knowledge, in the same sense that fusion power and other physical phase transitions are just the result of compressing matter.
PETER DIAMANDIS: I’m so glad you said that, because, you know, history is always written in hindsight. Right? It’s never written in the moment. And 2020 is a very memorable, easy year for the history books to write down. But I think what you said is dead right. If you take the exact GPT-3 algorithm set and scale the heck out of it, it reaches AGI. So nothing needed to change from that 2020 innovation era to today other than much, much more compute: huge data centers, big GPUs.
And so how would you say the birth of AGI was anything other than that moment in time? I think that’ll stick. Amazing. I love that idea. We just didn’t realize it at the time, obviously. You did, but nobody else did. I love that idea.
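The in-context learning idea Alex dates to the GPT-3 paper can be made concrete: the “training” happens entirely inside the prompt, as a handful of worked examples followed by a new query. This is a minimal sketch of prompt construction only (using the paper’s famous English-to-French style of example); no model call is made.

```python
# Minimal sketch of few-shot in-context learning: the "training data"
# is just example pairs placed in the prompt, and the model is expected
# to continue the pattern for the final query.

def build_few_shot_prompt(examples, query):
    """Assemble a few-shot prompt from (input, output) example pairs."""
    lines = [f"Input: {x}\nOutput: {y}" for x, y in examples]
    lines.append(f"Input: {query}\nOutput:")
    return "\n\n".join(lines)

examples = [("cheese", "fromage"), ("sea otter", "loutre de mer")]
prompt = build_few_shot_prompt(examples, "plush giraffe")
print(prompt)
```

Nothing about the model’s weights changes here; the task is specified just in time, which is what made the 2020 result feel like a phase change.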

AI Models Achieve Gold at Math Olympics

Alright. Some interesting news. And again, Alex, I’m going to lean on you here. So an OpenAI model wins gold at the Math Olympics, achieving a score of 35 out of 42 in the 2025 Olympiad, solving five of six world-class problems.
So talk to us, Alex. What is the IMO, the International Math Olympiad, and how important is this? You’ve been saying for some time now that in 2025 we’re going to see these AI models solve math. What does “solve math,” “solve physics” mean?
ALEXANDER WISSNER-GROSS: Oh, I love this achievement. It’s incredibly exciting. So maybe just to back up a bit. The IMO, the International Math Olympiad, is the hardest high school math competition. It is the Olympics for math for high school students. When I was a high school student, I was a member of the US team on the computer science version of the International Math Olympiad.
It’s a very competitive competition, and many of the IMO winners go on to become professional mathematicians. Many of them go on to, at least, now as we know with the benefit of hindsight, found frontier AI labs or be key leaders within them. So achieving, basically solving five out of the six problems on this year’s IMO is incredibly exciting.
I’ll go out on a limb and take maybe a slightly unpopular position, at least relative to the math community. Going back to our friend Ray, and one percent of the human genome being sequenced indicating that half or more of the project’s time had elapsed, I would argue that we’re actually most of the way towards math being solved, where you operationalize “math being solved” as superhuman AI performance, AI performance beyond professional mathematicians. So to the point where…
PETER DIAMANDIS: What does math being solved mean to the person listening?
ALEXANDER WISSNER-GROSS: It would mean basically that the work that professional math researchers, professional mathematicians, carry out can be fully automated by AI models. You just scale them, and out pop new mathematical insights. And I think…
PETER DIAMANDIS: Same for physics?
ALEXANDER WISSNER-GROSS: Same for physics. So look at another benchmark where the results were recently announced, the FrontierMath Tier 4 benchmark; I have a number of friends at Epoch AI, who are managing that benchmark. These are the hardest math benchmarks, the ones with problems that take professional mathematicians weeks to verify or solve independently. If you look at how performance on those benchmarks has been improving over the past few months, and if you believe in the law of straight lines, then it seems reasonable to predict that we’re going to see 20 percent of mathematician-level hard problems be solvable by the end of this year at the present rate of improvement, and potentially 60 or 70 percent of them be solvable in the next two years. And at that point, I would argue math has basically been solved.
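The “law of straight lines” reasoning here can be sketched numerically: fit a line to past benchmark scores and read off the extrapolation. The data points below are made up for illustration and are not actual FrontierMath results.

```python
# Hedged sketch of straight-line extrapolation of benchmark scores.
# The (month, percent-solved) points are purely illustrative.

def fit_line(points):
    """Ordinary least-squares fit y = a*x + b through (x, y) points."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# Illustrative monthly scores (percent solved) on a hard math benchmark.
history = [(0, 2.0), (2, 6.0), (4, 10.0), (6, 14.0)]
slope, intercept = fit_line(history)

def projected(month):
    """Extrapolate the fitted line to a future month."""
    return slope * month + intercept

print(projected(9))   # 20.0: ~20% by month 9 at this made-up rate
print(projected(24))  # 50.0: two years out, if the line holds
```

The obvious caveat, consistent with the saturation discussion earlier, is that real benchmark curves eventually bend; a straight line is only a local approximation.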

Quantum Computing and AI Integration

PETER DIAMANDIS: Amazing. Alex, got to ask you: one of the problems most near and dear to my heart is, can you run a neural network on a quantum computer? And writing those quantum algorithms is notoriously, incredibly hard and very, very math heavy. Does five out of six on the IMO mean we’re close to AI being able to write those algorithms, or not?
ALEXANDER WISSNER-GROSS: I suspect it’s correlated. I think there’s a common cause here, which is the rising tide of technical capabilities in math, physics, and engineering in frontier models. I think, Dave, what you’re highlighting is maybe that we’re missing a benchmark for quantum algorithm design, and maybe this is a call to action for the world.
If the goal is to build the world’s best quantum accelerated foundation model, we need evals to match. Just so the audience knows, a scalable functional quantum computer is within three years according to our best experts around MIT, maybe even sooner. The software to run on it is next level. But it’s software. You know? It could be solved very, very quickly.
PETER DIAMANDIS: So what are the implications of that, Dave, if we had that?
DAVE BLUNDIN: Well, yes. And nobody knows, because right now quantum computers load very, very, very slowly, on the order of thousands of times slower than a regular computer. But once all the stuff is in there, it processes instantaneously. What could take a regular computer 100,000 years to compute, it can do in seconds, but only within very specific domains.
So the question is, is optimizing a neural network one of those domains or not? And I know aspects of it are, we’ve already gotten that far, but any bottleneck kills the whole system, right? So the whole thing has to port or the hard inner loops need to port for it to be a big unlock. So nobody knows, but if math and physics are about to be quote unquote solved by AGI, maybe we’ll know as soon as, what, in the next six, nine months.
PETER DIAMANDIS: Fascinating. Alex, any prediction on that?
ALEXANDER WISSNER-GROSS: I would predict… I mean, the elephant in this particular room, I think, is complexity theory. A major challenge in the quantum information processing community has been identifying algorithms for which a provable quantum speedup, or quantum advantage, can be achieved versus classical computation, including versus stochastic classical computation.
So to the extent math and physics are de facto solved by frontier AI, I would hope and expect that we achieve superhuman performance in complexity theory as well. And so maybe AI complexity theory researchers will identify some new quantum advantages, some new complexity classes for which it is just obvious by construction that there’s a quantum advantage for AI.
PETER DIAMANDIS: I think I might understand what you just said, but I’m not a hundred percent sure. It just sounds like shit’s going to hit the fan and go much faster over the next few years and, you know, fasten that other seat belt you had.

Google DeepMind’s Math Olympics Success

Moving on here. So not to be outdone, Google’s DeepMind team also wins gold at the Math Olympics, scoring 35 out of 42. Congratulations, DeepMind. Congratulations, OpenAI. You know, I did a little digging on the International Math Olympics, and I found two things I find fascinating just to have a conversation on.
So this is the US team. The US team scored second in IMO 2025. And if you look at the US team, it’s got six members. Four are basically of Chinese descent, and one is Thai. So five out of six members are Asian. The US team members scored between 33 and 39 out of 42.
But check this out. The Chinese team, all six members scored a perfect 42 out of 42. And I’m just saying that there’s something in the water in China or the gene pool in China, and we’re just you know, we talk about US versus China. There is a huge intellectual capacity there that resonates towards math and computation. Comments? Thoughts?
DAVE BLUNDIN: Well I don’t think it’s anything to do with the gene pool. I mean the population is big, which always helps. But it’s the focus. I mean the education system and the culture cares about this area of endeavor. And there’s so many brilliant young students in America, and the schools are saying, “Yeah, go focus on your soccer game.” And it’s just not a focus of the education system, and so a lot of people just don’t go down this path.
So what happens is they get all the way into college, maybe all the way through college, and then they land inside of our incubation environment in Cambridge or your new setup in LA. And they’re starting there, while in China, they started at age eight, or younger. So they’re just ahead of it.
PETER DIAMANDIS: Alex, thoughts?
ALEXANDER WISSNER-GROSS: Just on the previous item here regarding Google DeepMind’s IMO submission, to comment on that since we flew by it: I think it’s super interesting to look at DeepMind’s particular solutions, as well as OpenAI’s. They’re all in natural language, and this is in stark contrast to what folks may remember from past papers out of DeepMind on solving math, which were very focused on formal reasoning, on first requiring that the IMO problems, in geometry and otherwise, be formalized in a formal language before they could be solved.
Here, if you look at the reasoning process and look at the output, it’s all natural language. So going back to your earlier comments, Peter, about wouldn’t it be wonderful if we had a benchmark for solving all human disease, or unlimited energy, or fill in the blank with another abundance-oriented topic: historically, prior to these IMO wins, one might have reasonably suspected that we would need formalizations of all of these problems.
And in fact, if folks take a look at the Formal Conjectures repository on GitHub from DeepMind, there appears to be a parallel effort to formalize open conjectures in math. But what’s so startling about the DeepMind and OpenAI accomplishments here is that this was all done with natural language. There was no formalization step.
So it does, in my mind, raise the question, when we start to think about solving all disease, something that various leaders in the space have talked about potentially being achieved in the next five years: do we even need formalization at all, or could this just be solved with natural language, the way human reasoners think? That, I think, is one of the most important ontological shocks coming out of this year’s IMO.

America’s AI Action Plan: An Industrial Strategy

PETER DIAMANDIS: Fascinating. Alright. Let’s move to an important conversation next, which is America’s AI plan. This was unveiled by President Trump just recently, a couple of days ago. The plan has 90-plus federal policies. The key moves include exporting full-stack AI tech, fast-tracking data centers, and cutting AI regulation. It has three pillars, and we’ll talk about those pillars.
I’ll just hit on pillar one, and then we’ll pause to discuss. This is really an acceleration of the acceleration. So the first part is rescinding old regulations and reviewing state-level rules that slow AI development. So get rid of the roadblocks. Number two, promote open source AI models for startups and research, and then invest in worker training and retraining.
Let me just hit these, and then we’ll talk about it. Pillars two and three streamline data centers, chip factories, and energy projects: basically, you know, allow for rapid permitting to build things as quickly as possible, invest in nuclear and geothermal power and secure data centers for military use, and export US AI tech to our allies.
Alex, this sounds like a war footing in the global AI war.
ALEXANDER WISSNER-GROSS: Yeah. I read the AI Action Plan, and my sense is this is potentially the broadest US industrial strategy that we’ve seen since President Eisenhower and the interstate highway system. And I’m reminded of an anecdote in the historical literature: in 1939, prior to World War Two and the Manhattan Project, Niels Bohr told Edward Teller, the father of the hydrogen bomb, that building an atomic bomb could never be done unless you turned the United States into one huge factory.
And it appears to me this AI action plan is more or less doing that. It is a plan to turn the US into one huge AI factory.
PETER DIAMANDIS: Dave, what do you think about it?
DAVE BLUNDIN: Well, I think we’re incredibly lucky the way the timing lines up with a single administration. I’m not political at all, but we have continuity.

The Future of AI Infrastructure and Investment

Like, we’re not even one year into a four-year term now. So this is the roadmap. We can at least rely on it for three and a half years, which is exactly concurrent with the AGI explosion. At least we know this roadmap won’t get overturned by the next election and thrown in the trash; the first sentence in this document literally throws everything the last administration did in the garbage and starts over. And that’s one of the great flaws in America, right? This lack of continuity.
But here we’re going to have continuity for the time window that matters, three and a half years. So I think it’s just an incredible miracle that David Sacks got recruited into the government and that he took the job. Because if you look at the authors of this, they’re actually really brilliant people who know what they’re talking about, which is pretty damn rare in Washington.
And so I want to get Michael Kratsios, who’s one of the authors, here on our pod, or to join me at the Abundance Summit. And first of all, there’s no congressional approval required. This is all being done by executive order. And so this is not a matter of if, it’s a matter of go, go, go.

Energy: The Critical Bottleneck

PETER DIAMANDIS: I mean, we’re going to talk about this in a minute, but we are so far behind the energy curve required to power our AI revolution. And we’ve heard Eric Schmidt say this in our last podcast, Dave, where we’re not chip limited in the United States. We’re electricity limited. We’re power limited. And so this is like, let’s double down on nuclear and geothermal.
I note that this action plan did not go heavy on solar, which I’m still scratching my head on because as we know, China has gone all in on solar and everything else. But the ability to just wipe away the state and federal regulations that slow things down on building, I mean, if we’re going to compete, this is the time to pull out all the stops.
DAVE BLUNDIN: Well, I would point out too that we’re only not chip limited because we’re importing everything from Taiwan. Taiwan is still manufacturing, what, eighty, ninety percent of the GPUs driving all of AI. It’s all from TSMC: a single point of failure, one company. And it is such a huge national priority to get new fabs, and also a new fab company, or get Intel rebuilt; we need some diversity in that area. We’re going to solve the energy problem. It’s an acute problem, but we’ll solve it. But if the chip supply gets disrupted by a Chinese invasion of Taiwan or otherwise, that’s going to be the real vulnerability. And I think it’s pointed out in this document. It’s not really highlighted too much, but it’s up there in bullet one as a critical constraint.
PETER DIAMANDIS: Alex, other thoughts, please.
ALEXANDER WISSNER-GROSS: Yeah. Maybe the bookend to the Bohr comment is that years later, it’s reported that after the Manhattan Project, and after the country was in many ways strip-mined in order to collect enough refined uranium-235, Bohr told Edward Teller, “I told you it couldn’t be done without turning the whole country into a factory. You have done just that.” And I think that’s the race dynamic we find ourselves in.
And whether it’s one particular energy source or another, I think energy sources that can’t be assembled in time for this superintelligence explosion, even though they might be more ergonomic over longer time scales, if they can’t be provisioned, permitted, and deployed very quickly, they may be obsolete.

Massive Capital Investment in AI Infrastructure

Let’s watch a quick video here of president Trump on investments in this exact area.
PRESIDENT TRUMP: We’re back in Pittsburgh to announce the largest package of investments in the history of the Commonwealth of Pennsylvania, and it’s not even close. I don’t imagine it’s too close. I don’t think second is too close. That’s a big statement. This afternoon, twenty leading technology and energy companies are announcing more than ninety-two billion dollars of investments in Pennsylvania.
PETER DIAMANDIS: A lot of capital flowing in. We’ve seen capital commitments out of Saudi, out of the Emirates, out of every major tech company. You know, I was just making this comment the other day to a friend. There is almost an unlimited check being written across this converting dollars into chips and electrons.
And interestingly, it’s not just flowing into the equity market. It’s also flowing into the bond market, which perhaps not enough people pay attention to. And the bond market is absolutely enormous for fixed income. So maybe another angle here is what does the future securitization of these hundreds of billions, if not trillions of dollars of investments in AI even look like in the future?
DAVE BLUNDIN: It’s so cool to watch, because most of the last twenty, thirty years has been software dominated, not building out infrastructure. But AI is actually a combination of software plus huge concrete slabs and massive hurricane-proof tents, and it’s all liquid cooled, so plumbing and piping like you’ve never seen. I’ve toured a couple of these data centers, and it’s a million valves. Like, a million valves. And in that sense, it’s much more like, kind of like you were saying, 1939 or the buildup to World War II.
And so when you mentioned the bond market, yeah, all that physical infrastructure is usually funded through a combination of equity and debt. And so you’ve got bonds to issue. And it’s global too. Those bonds go out to the whole world. They’re not just a US thing. Pretty wild.

The AI Investment Vortex

I’m curious what’s not being funded. This money is being deployed here rather than someplace else, so everything else is not getting funded. If you’re trying to build the coolest company ever and it has nothing to do with AI, you literally can’t get funded. You can’t even get a meeting. And it makes sense to me. I know it’s really frustrating for a lot of people, but it makes sense, because the priority of this for the world is so much higher than any other idea, no matter how good the idea is.
One of my business partners, Dave Massey, is really big into real estate. He builds hotels and restaurants. I was telling him, the data center build out is going to be a trillion dollars a year starting in twenty twenty nine. It’s going to ramp from here to a trillion dollars a year. Do you realize how big that is compared to anything in hotels and restaurants? It’s going to suck up all the capital.
And that’s why the startup economy is so good right now: the US venture market’s only two hundred billion a year, tiny compared to the data center build-out. So all this massive funding coming into AI is sucking up the startups. They’re getting acquired, like Windsurf for two and a half or three billion dollars in year two, because they’re getting sucked into this vortex that’s funded by much bigger capital pools: the bond market, the public equities market.

The Innermost Loop of Civilization

ALEXANDER WISSNER-GROSS: I think it’s also interesting to extrapolate. So today it’s energy and data centers and fabs. Tomorrow, I would reasonably expect this will include robotics and drones, humanoid robots. And I often think, in part with my computer science training, what’s the innermost loop of civilization? If you’ve played the video game Civilization, there’s this notion of a technology tree: certain technologies lead to other technologies.
Well, there’s also, I think, an important notion of innermost feedback loops of civilization. Certain technologies beget other technologies that then reinforce in a positive feedback loop. What is the innermost feedback loop of technology investment today? I think what we’re seeing here, with hundreds of billions, if not soon trillions, of CapEx and OpEx going into AI and energy, is the beating heart, the innermost loop, of civilizational investment. Once it achieves some threshold, it’s going to spin out and touch much of the rest of the economy that right now is being deprived of oxygen.
DAVE BLUNDIN: I love what Alex is saying about robotics too, because my daughter just moved back to Cambridge to work at Moderna, and she hasn’t been around much, and she came to a LINX studio and she said, “This has got to be the best office in the country.” It was like so heartwarming to hear that from your daughter. But we have so much energy in the building, it’s just off the charts.
But once the robotics hits, then not only will it be what it is today, but there’ll be robots, experimental robots all over the place. And I remember back at the AI lab at MIT back when I was an undergrad, all the robots used to be there. You know, now everything went to the cloud. It’s all just a bunch of terminals. But when the robots come back, it’s so fun. The energy goes through the roof.

The Rise of Humanoid Robots

PETER DIAMANDIS: You know, Dave, this year I mean, you’re going to be on stage with me at the Abundance 360 Summit in March. This year, our theme is digital superintelligence and the rise of humanoid robots. And I’m planning to have five of the top robot companies there. And I just want robots walking around all over the place so you can go and play with them.
DAVE BLUNDIN: Yeah. And they don’t just walk around. Remember the old pogo-stick robot? There’s one that used to clean up the Coke cans and eat them. But the pogo stick is just bouncing around. You’re like, it’s fun.
PETER DIAMANDIS: And of course, we’re heading up to the Bay Area in a couple of days to go and visit 1X Technologies, maker of the Neo Gamma robot, and we’ll be recording from there, which will be very cool. And we can have the age-old conversation that Salim, our dear brother who’s not here, keeps on raising: why do humanoid robots only have two arms? Why can’t they have six arms? I’m going to ask that question of our host.
DAVE BLUNDIN: That’ll be fun.

The Browser Wars: Is AI Killing Google Search?

PETER DIAMANDIS: And now it’s time to talk about the browser wars. So here we go. A question the Wall Street Journal posed: is AI killing Google search? It might be doing the opposite.
So the article states AI Overviews serve two billion-plus users per month, helping drive a record fifty-four point two billion dollars in Q2 search revenue. I mean, first of all, the idea of fifty billion dollars in a quarter is insane. This is why Google is such an incredible cash machine. Search impressions are up forty-nine percent, even as Google faces increased competition from Perplexity and OpenAI. We’ll talk about that in a second.
So it looks like, you know, this has always been an existential threat for Google, but it looks like they’re moving in the right direction. Dave?
DAVE BLUNDIN: Yeah, I don’t believe it at all. Bullet one, I absolutely positively do not believe for an instant. The search volume’s going up, but the traffic is going to the AI thing they added at the top. You can see it in the little picture there. And it’s going to cannibalize the hell out of the clickable links down below.
The reason they haven’t had any impact yet on revenue is because they’re showing more and more ads all the time down below, and they just keep ramping that up. But I don’t think Google’s in trouble. I’m not saying that. I’m saying search is moving entirely over to AI. It’s going to crush that core revenue engine at Google.
But at the same time, YouTube is growing like crazy. Google’s own AI is growing like crazy. And they have lots of opportunity to actually stay on top of the food chain. But if you look at the market caps today: remember, Nvidia’s worth twice as much as Google today. So the market is telling you something.
PETER DIAMANDIS: Amazing. Yeah, isn’t that nuts? If you said that five years ago, people would say you’re insane.

The Future of Search Revenue and AI Monetization

PETER DIAMANDIS: There’s no way. But people are betting the future is entirely going to belong to AI and talking to AI, not to clickable search links. So how does Google make its money, when its money comes from people buying link ads? What’s going on there? How are they going to make money in the future? Any ideas?
DAVE BLUNDIN: Yeah, nobody’s figured that out yet. It’s really fun to watch the evolution. Right now it’s kind of a free for all. So if you ask the AI, “Hey, what’s the best insurance policy to get? Where do I get a cheap mortgage?” It gives you a free answer and a very, very good answer. And so that’s what cannibalizes the clickable revenue.
But they’ll find a way to charge for that or to monetize it, I’m sure. It’s very much in flux right now. We’ve got a couple investments in companies that are figuring out how to turn that into a revenue machine. And it’ll be huge. It’ll be on the order of hundreds of billions to even a trillion dollars of revenue in that channel.
PETER DIAMANDIS: And then there was a comment that Sam Altman made that OpenAI’s GPT-5 will not have its answers tied to advertisers who are paying OpenAI. So I am curious: if we’re going to believe the output of these large language models, are they being influenced by who OpenAI’s or Gemini’s or X’s customers are, or not? Alex, what are your thoughts on this? Is there a revenue engine?
ALEXANDER WISSNER-GROSS: Yeah. I mean, there’s a popular narrative out there that affiliate links and referral revenue is the killer business model here, but I think there’s a story behind the story. If you’ve used O3 or some other modern, agentic model, it’s doing far more searches for you than a human could do.
If you ask it a question, it will fire off ten, twenty, a hundred searches to its arbitrary, white-labeled back-end search engine. It’s doing far more searching than I would have done if I had asked the question of a leading search engine today. So I would expect the number of searches to skyrocket as we start to delegate the problem of search to agents. It’s just that the agents fire off far more searches, and they don’t click on the ads.
And by the way, it’s a major pain point how expensive grounding is. So grounding, for those who don’t live and breathe this, typically takes the form of avoiding hallucinations in the answers to frontier models by having the frontier models conduct searches and then ground their answers in the facts or information that comes back from the searches. Grounding agent results is insanely expensive, and I haven’t seen major progress, not the sort of orders of magnitude cost reduction that we see in terms of broad capabilities from frontier models for grounding.
And I think that’s actually one of the next major frontiers. If some non-incumbent wants to come in and radically improve the economics of grounding information based on search, I think they will transform the market.
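The grounding loop Alex describes can be sketched in a few lines of Python. Everything here is a hypothetical placeholder: `search` and `generate` stand in for a real search backend and a real frontier model. The point is the shape of the loop, and why one agent question fans out into many billable searches.

```python
def search(query: str) -> list[str]:
    # Placeholder: a real agent would call a paid web search API here.
    # Each call is one of the many searches the agent fires off.
    corpus = {
        "capital of France": ["Paris is the capital of France."],
    }
    return corpus.get(query, [])

def generate(question: str, evidence: list[str]) -> str:
    # Placeholder: a real frontier model would be prompted with the
    # evidence and instructed to answer only from it (the "grounding").
    return evidence[0] if evidence else "I don't know."

def grounded_answer(question: str, queries: list[str]) -> str:
    # One user question fans out into many searches; the answer is then
    # grounded in whatever comes back, to avoid hallucination.
    evidence = []
    for q in queries:
        evidence.extend(search(q))  # each search has a real cost
    return generate(question, evidence)

print(grounded_answer("What is the capital of France?", ["capital of France"]))
```

The cost problem Alex flags lives in that inner loop: if the agent issues a hundred queries per question, grounding costs scale with the fan-out, not with the number of user questions.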

AI-Driven Commerce and the Future of Purchasing

PETER DIAMANDIS: In my last book, “The Future is Faster Than You Think,” I talked about how the advertising market was going to get transformed by AI in the following way. There’s going to be a point in which I ask my version of Jarvis, “Buy me some toothpaste or buy me an outfit.” And it’s not going to be going and looking at ads to see who’s got gleaming white teeth.
It’s going to basically look potentially at my genetics or look at a whole bunch of independent data and make a purchase based upon what’s best for me, not influenced by ads whatsoever. And so that’s going to be an interesting transformation of how do you influence the AIs in an authentic fashion so they’re motivated to order the product that you want them to order.
ALEXANDER WISSNER-GROSS: Have you tried that, Peter? I mean, I had the experience a couple days ago of doing my first O3 product purchase. I fed it a laundry list of requirements and said, “Go find me something.” It identified the top three options and presented a user interface in-line, sort of a third pane within OpenAI O3, for enabling the purchase. And I made my first AI-directed in-line product purchase, and it was a seamless experience.
PETER DIAMANDIS: What did you buy?
ALEXANDER WISSNER-GROSS: A hat with a long list of requirements.
PETER DIAMANDIS: Okay. You’ve got to start somewhere. Oh, that’s hilarious. I mean, I’ve used the models to make a recommended list, but I haven’t used one for an inline purchase yet. But again, how we buy stuff is going to be fundamentally transformed over the next twelve to twenty-four months. Fundamental.
DAVE BLUNDIN: Yeah. Not just buy stuff, but think about travel or all your life decisions. Where am I going next? What am I doing tonight? What are my friends doing? All that’s going to go through that same machinery. And so that drives all the volume at the bars and restaurants and flights and hotels and entire economy.
It’s surprise and delight. “I’m going to Portugal. Set it up. Buy everything. Here’s my budget. Just set up all my meals, my restaurants, my experiences. Oh, by the way, my kids are coming, so set up stuff that you think they’ll like.” I mean, it’s a level of gourmet experience that, from every standpoint, you could not afford right now.
PETER DIAMANDIS: Yes, I totally agree. I think life is so dominated right now by marketing bullshit convincing you that you need something that you don’t really need, that’s way too expensive – the car, the hotel, whatever. There’s so much more fun that you could have had. And life is kind of dominated by missed opportunities to have fun that just kind of passed you by.
And so I think your AI agent’s going to do a much better job of helping you with that, just because everybody thinks they need this thing. Well, why do you think you need that thing? Because it’s marketed in your face just like every time – it’s just in your face. And it creates anxiety, it creates stress, creates jealousy. At the end of the day, life can be so much better and so much easier. And I think the agents, if we don’t mess it up, the agents have every opportunity to bring that to us.
DAVE BLUNDIN: That’s awesome.

OpenAI’s Web Browser Challenge to Google Chrome

PETER DIAMANDIS: All right, moving along. The competition is coming for search: OpenAI to release a web browser challenging Google Chrome. It’s launching soon on Chromium with GPT-style AI to handle agentic tasks, targets 500 million weekly ChatGPT users, and threatens Google’s ad-driven Chrome empire.
So the first thing that hits me here is – is the web browser still a thing? And why are people talking about web browsing experiences when I think that’s sort of like the year 2000? Thoughts?
ALEXANDER WISSNER-GROSS: I think at one level, we see this every tech cycle. Every major tech company needs a sovereign distribution channel. And to the extent that the browser itself – remember, Chromium was forked off of KHTML, and Safari is similarly forked off of KHTML. Every major tech company feels a strategic incentive to own its own distribution channel to the extent that the browser or the operating system or the device is that distribution channel. I think this is just par for the course.
What’s interesting to me, though, uniquely on this subject is if you play with OpenAI Operator or Google Mariner or more recently, ChatGPT Agent, you start to see the strengths and weaknesses of so-called computer use agents or CUAs. These are agents that are manipulating browsers for you. And I think we’re so painfully close to having an agent that’s able to, in real time, carry out essentially all or most economically valuable human browser tasks. And that’s the real headline.
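The computer-use-agent loop Alex describes, observe the browser, pick an action, execute it, repeat, can be sketched in miniature. Everything below is a toy stand-in: the policy, the action names, and the environment transitions are hypothetical, not any real product’s API.

```python
def propose_action(observation: str, goal: str) -> tuple[str, str]:
    # Placeholder policy: a real CUA queries a vision-language model
    # with a screenshot of the page and the user's goal.
    if "search box" in observation:
        return ("type", goal)
    if "results" in observation:
        return ("click", "first result")
    return ("done", "")

def run_agent(goal: str, max_steps: int = 10) -> list[str]:
    # The loop every CUA shares: observe state, choose an action,
    # act, and repeat until the task is done or the budget runs out.
    observation = "search box"
    trace = []
    for _ in range(max_steps):
        action, _arg = propose_action(observation, goal)
        trace.append(action)
        if action == "done":
            break
        # Toy environment transition standing in for a real browser.
        observation = {"type": "results", "click": "task complete"}[action]
    return trace

print(run_agent("buy a blue hat"))  # ['type', 'click', 'done']
```

Real systems differ mainly in how good `propose_action` is; the outer loop, and the `max_steps` budget that keeps a confused agent from running forever, are common to all of them.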
DAVE BLUNDIN: Yeah. Peter, your point is right on too. The word “browser” is going to go in the trash can. Really, the word “portal” might come back. Remember back in the Yahoo days?
PETER DIAMANDIS: I like portal.
DAVE BLUNDIN: Yeah. Yeah. Portal’s much better for what’s going on here. Because if you – we’ll see the new Perplexity Comet browser in a second here, but it’s a portal. It’s not a browser.
PETER DIAMANDIS: I still go back to Iron Man and Jarvis. I think they had it right. To a large degree, it’s going to be voice interface until we get to BCI. It’s going to be the ability for you to have constant screens deployed every place, either on your heads-up display, on your AR/VR glasses, or on screens in the home. And the AI is displaying on whatever you’re looking at at the time. But the idea of a computer web browser feels very last century to me.

Perplexity’s Comet Browser

PETER DIAMANDIS: All right. Let’s move on to Perplexity. And I’m not a Perplexity user per se. Dave, how about you or Alex?
DAVE BLUNDIN: Oh, yeah. You’ve got to try this stuff. This is my favorite type of stuff. I’m paying for it. I’m paying for it.
PETER DIAMANDIS: I’m just not finding myself using it as much as others are.
DAVE BLUNDIN: So Alex discovered this, like always. He’s like, “Hey, you’ve got to try this right now.” And so I go to try it. And it’s like, “Yeah, 200 bucks a month. Another 200 bucks a month. Or you can wait two months.” Like, goddamn it. So it’s incredible.
PETER DIAMANDIS: So here’s the title. Perplexity launches COMET, an AI-powered web browser. So talk to me about this, Dave. Is this important?
DAVE BLUNDIN: It’s important because you can start there. Apple and Google try to intercept you before you get to Perplexity. So Perplexity fights back by saying, “Look, install this on your laptop and just throw away Safari and Chrome, and you can just start here.” And then when you start there, it’s all AI all the time. It’s beautifully integrated. It’s really, really well designed.
And you can see, because they’re not worried about cannibalizing their search revenue like Google is, they can actually make a really clean from-first-principles design that’s AI first, and it gives you search results when you want search results. It gives you AI when you want AI. It shows you where things are going to go. So it’s worth trying just for that reason alone.
ALEXANDER WISSNER-GROSS: Yeah, and maybe add to that. So one of my favorite challenge problems today in middle of 2025, when I encounter a new computer use agent, and I would classify Comet as one of them, is I ask it to win at a game of chess. And some CUAs will get most of the way toward actually winning a single player game of AI chess against a web competitor. Some will outright refuse to. When I try to persuade Comet to do it, it usually refuses. I can nudge it along. It’ll play part of the game. ChatGPT Agent or Operator will usually get most of the way through a game of chess.
So where I think all of this goes is it’s more than just a browser. Totally agree, Peter, that browser is almost like a straw man template that we just have to pass through to get to solving the real problem, which is full vision, language, action, or VLA models that are able to solve general purpose challenges out in the physical world.
But this is a waypoint almost. Before we can have those humanoid robots that are doing our laundry and cooking our food and solving all the problems in the physical world, I think it’s a necessary midpoint to have an agent that’s able to accomplish economically useful tasks in the browser.
PETER DIAMANDIS: Nice. Yeah, we should get Aravind on the podcast here. He’s showing some serious Steve Jobs capabilities. Because everybody would have said, “Well, look, he’s really brilliant, but he doesn’t have his own foundation model, and OpenAI and Google are just going to crush him.” But now he’s got a lot of capital, he’s got incredible insights, and he really understands the consumer experience.
And so it’ll be fun to track his journey, because there’s a version where he gets annihilated by the big guys, and there’s a version where he emerges like Steve Jobs and rises to the top. It’ll be a fun drama to track.
DAVE BLUNDIN: But try Comet in the meantime. You’ll get a sense of, “Wow, it really is much better.”
PETER DIAMANDIS: I’ll play with it.

AI Writing Code: The Path to Recursive Self-Improvement

PETER DIAMANDIS: All right. Here’s the next note. AI writing 50 percent of Google’s code and we see here a chart, basically over the last two years going from 25 percent of the code being written by AI to now topping out at 50 percent. This particular article points out that Amazon is writing 25 percent of its code, Microsoft 25 to 30 percent, Robinhood 50 percent. Alex, is this just predictable? Are we going to get to 100 percent soon? What’s this mean to you?
ALEXANDER WISSNER-GROSS: It is tantalizing and riveting. What these numbers don’t tell us is what percentage of time is being saved by human developers, and that would be a more direct indicator of how close we are to recursive self-improvement. If we’re nearing 100 percent time savings, then the AI is writing itself at this point.
So we don’t actually know based on these numbers. Is it the 50 percent most boilerplate-esque portion of the codes that’s being written by AI, or is it the 50 percent most valuable? But either way, I think recursive self-improvement is imminent. It would just be lovely to have a more direct indicator of that.
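Alex’s distinction between percent of code and percent of time can be made concrete with Amdahl’s law: if the AI-written half of the lines is mostly boilerplate that only accounted for a small share of developer time, the overall speedup is modest. A minimal sketch, with illustrative numbers of my own rather than anything from the article:

```python
def speedup(time_fraction_automated: float) -> float:
    # Amdahl's law: if a fraction f of developer time is fully
    # automated, the overall speedup is 1 / (1 - f).
    return 1.0 / (1.0 - time_fraction_automated)

# 50% of *lines* might be cheap boilerplate worth only 20% of *time*:
print(round(speedup(0.20), 2))  # 1.25

# ...versus AI genuinely replacing 50% of developer time:
print(round(speedup(0.50), 2))  # 2.0
```

This is why the line-count statistic is an indirect indicator: the same “50 percent of code” headline is compatible with anything from a 1.25x to a 2x (or greater) productivity gain, and recursive self-improvement only becomes imminent as the automated *time* fraction approaches one.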
PETER DIAMANDIS: So let’s talk about that for one second. Recursive self-improvement is when AI is rewriting its own code. And there are a lot of science fiction movies based upon that, and that’s where shit hits the fan and goes sideways for us.

AI Safety and Regulation Concerns

PETER DIAMANDIS: So should I be concerned about that, or is that just an inevitability? I remember the three “thou shalt nots” in the early days of AI: thou shalt not allow your AI onto the open web, thou shalt not allow recursive self improvement, and then there’s a third one because there’s always three. But how do you think about this, Dave?
DAVE BLUNDIN: Well, you should not be worried if we have a handful of brilliant people in government who understand this. Otherwise you should be worried sick. Because there are very straightforward guardrails that don’t slow down AI progress, that keep us competitive with China and the world, and that don’t slow down the military aspects of this, which are critical. The military part has to keep up and be ahead of the world too.
You don’t need to slow any of these things down while still guardrailing the system. Because in the movie version of it, it’s pretty accurate, just like Jarvis is pretty accurate. If you let it create its own objectives and you let it design its own next parameter set around objectives that you didn’t give it, then it becomes this kind of out of control conscious thing that we absolutely don’t need.
The world doesn’t need it. Society will never need it and it’s dangerous as all hell. But you can prevent it with just some straightforward rules while still getting all the benefits. And I think if there’s a handful of very, very smart people who understand that, then there’s nothing to worry about.
PETER DIAMANDIS: Alex, do you believe that? Do you agree? Or do you think it’s up into the right and out of control?
ALEXANDER WISSNER-GROSS: I tend to be more on the accelerationist side, as opposed to what might be called the safetyist camp. My worry tends toward over-regulation. By analogy with explosives, to the extent we’re expecting an intelligence explosion, I think of this almost as a shaped charge, and we have an opportunity to shape the explosive charge here in a positive direction.
But I also think of all the downsides of not achieving superintelligence and not solving all of the major outstanding problems in a timely fashion. And I think on balance, if recursive self improvement in a thoughtful but not overly hamstrung way buys us solutions to the grand challenges of the universe, I would tend to prefer that future.
And it’s not a matter of whether it happens; it’s whether it happens in the US versus other parts of the world. Right? I think that’s the element of the shaping. So what values do we attempt to imprint on it? National values, cultural values. I think all of this is at play.
But I also think if you look back a decade or two, many of the people who were thinking about AI safety were maybe thinking too unambitiously at the level of having a single human align a superhuman AI. That was never going to work. What’s actually happening arguably is all of humanity through government, through multi-corporation competition, through lots of individual leaders and researchers. It takes an entire civilization to align an AI, not an individual, and that’s what we’re seeing.
PETER DIAMANDIS: Love it. Love it. By the way, I want you to take a second if you haven’t yet and give me some comments on Alex’s brilliance as a member of our mindset mafia here and our moonshot mates. Alex, thank you so much for commenting on this. I love the way you think.

The AI Talent Gold Rush

PETER DIAMANDIS: Alright. Here’s another comment. NVIDIA is making more billionaires than anybody else in the world. Let’s listen to Jensen.
“We see this capital being applied to human capital in a way that we never thought was possible. It used to be NBA players signing three hundred million dollar contracts. Now it’s model researchers. And then there was a post this weekend that said that there was a person that was offered a billion dollars over four years by Meta. Now if that’s happening at this layer, why hasn’t it happened at your layer? Because you are the enabler of all of that. And how do you think all of this human capital is going to actually play out?”
“First of all, I’ve created more billionaires on my management team than any CEO in the world. They’re doing just fine. Don’t feel sad for anybody at my layer. Everybody’s doing okay. My layer’s doing just fine. The big idea, though, that you’re highlighting is that a hundred and fifty or so AI researchers, with enough funding behind them, can probably create an OpenAI. A hundred and fifty people. DeepSeek’s a hundred and fifty people. Moonshot’s a hundred and fifty people. If you’re willing to pay, say, twenty billion or thirty billion dollars to buy a startup with a hundred and fifty AI researchers, why wouldn’t you pay one?”
Incredible. Dave, how do you think about it?
DAVE BLUNDIN: Well, I think the story within the story is that the equity upside is dominant versus the salary. These NBA-style signing bonuses are making news, but the founders of the companies, and then the acquisitions, have already created much bigger numbers through the equity upside. There’s a lot of research showing that in the future, because AI does so much of the work, ownership of equity stakes and physical assets generates ninety percent of the wealth in the world, not your day job. And so I think Jensen’s just saying that in another way.
PETER DIAMANDIS: Alex?
ALEXANDER WISSNER-GROSS: It’s funny. I have friends who tell me that they literally live off of their NVIDIA stock holdings. I think we’re lucky to live in a world where accelerated compute, which is creating enormous amounts of wealth for humanity gets rewarded. We’re lucky to live in a world where markets reward that wealth creation and incentivize it. It’s wonderful.
PETER DIAMANDIS: Amazing.

The Evolution of Video Generation Models

PETER DIAMANDIS: Alright. I love this. This is more on Google’s VO3. They’ve created a mechanism that allows you to draw as an artist would on a video frame and have the VO3 model actually implement it. Let’s take a quick look here.
So on this frame, the artist is drawing in an aurora borealis, and of course, it’s instantly rendered in the video. This is VO3, and it’s still one of the incredible shockers out there. And Alex, I appreciate all the VO3 videos you keep sending me via text. Thank you. I love your Star Trek and space themes.
What is real anymore? Yeah. So how far are we, Alex, from the first VO3 generated full feature film?
ALEXANDER WISSNER-GROSS: It may exist somewhere already, and not be evenly distributed. I think one of the biggest shockers for me from this story is how close we seemingly are to a final convergence between text based language models and diffusion based or diffusion transformer based video models.
And it makes me wonder what does the final model look like? It seems like the entire space, all of these different model architectures are starting to converge. And what does the final converged architecture look like? And part of me wants to think it’s going to look like a massively multimodal model that handles text and video and audio and DNA, and raw machine data and many other modalities.
But at the same time, if you remember the architecture wars in computer software engineering – should we have a microkernel architecture? Should we have a monolithic kernel? I think it’s going to look like a microkernel that handles every single modality, and we start to see that here with in-frame visual prompting, visual text prompting of video output. We’re starting to see the glimmers of sort of the ultimate transfer learning between modalities, and it’s incredibly exciting.
PETER DIAMANDIS: I mean, it’s the humanization of the process. Right? So we’re connecting, and giving guidance as we would to a person, without having to go through, specifically, code to enable what we want to see. Why stop there? Why not add human thought via brain computer interfaces as yet another modality? I don’t think it ends with text and video. I think this goes all the way to the end game.
ALEXANDER WISSNER-GROSS: I actually hate – among MIT alums, I least like the idea of plugging something into my skull and communicating directly. Or I don’t like the upload either. Just very different.
PETER DIAMANDIS: Can’t wait to plug in personally.
ALEXANDER WISSNER-GROSS: Okay, well you plug. I’ll see how it goes for you.
DAVE BLUNDIN: This video though, this is emergent behavior. This is not built by some software engineer. Hey, you can annotate now. This is emergent from the model itself. You’re going to see more and more of that where you can do things with these capabilities that the authors didn’t even know you could do. And so it’s really empowering for the user, the creator, to say, “Hey, I discovered that you can use VO3 to do this thing. No one even knew.” You’re going to see more and more of that. It’s actually pretty cool. So we’ll track them as they come out.
PETER DIAMANDIS: Awesome.
ALEXANDER WISSNER-GROSS: One other comment on that just is, you’ll hear the term “world model” often. Arguably, a model that understands all the physics of the real world and can obey text instructions and transfer between them starts to be a true world model, and that’s also very exciting. It’s Star Trek holodeck level.
PETER DIAMANDIS: I love that. And yes, I can’t wait for my holodeck.

AI Companions and Teen Mental Health

PETER DIAMANDIS: Alright. So here’s the next article up. Nearly seventy five percent of teens have been using AI companions. So seventy three percent of teens age fourteen to seventeen have used an AI companion. Thirty seven percent have shared personal secrets, and teens using AI companions are twice as likely to feel depressed or lonely.
So I have two fourteen year old boys, and we’ve made a decision that they’re not going to have a cell phone until they’re sixteen. They do have computers, but they have not been playing with AI companions yet. I mean, this gets very scary in some ways of breaking a normal socialization loop, which is so important for mental health. Dave, any thoughts on this?
DAVE BLUNDIN: Yeah. This is so easy to fix, but the problem is that if you build a video game or you build a virtual environment, your incentive is to trap the person, try and get them to spend their entire day inside the game. Because that’s where you generate more revenue, more addiction, more fees, more cross sells.
But it’d be much easier for the AI to say, “Hey, it’s time to take a break. It’s time to go outside. It’s time to get some sunshine and some vitamin D.” Easy to build that in. It’s just not in the incentive of the creator to do it.
So now you get these super engaging AI companions that are literally like a soulmate. And they’re so nice to you. They listen to every word you say. They hang on every word. They laugh at your jokes. And that creates a really slippery slope.
I didn’t know it had gotten to this level already. I was telling the kids this the other day. They’re like, “No way, no way.” I’m telling you, it’s in the data. And it’s not like it’s a survey. People log in, you just count the logins. I’m pretty sure it’s right. So it’s crazy how quickly this has happened.

AI Agent Mishaps and Learning from Failures

PETER DIAMANDIS: Alright. Next up, Replit CEO apologizes after its AI agent wiped a company’s code base. Oh my god. Ouch. It’s like, how does that come across, Dave? It’s like, “Oops. Sorry about that.”
DAVE BLUNDIN: I had the same thing actually. Literally just had the same thing. I put Cursor into full agent mode and said, “I’m just going to walk away for an hour or two.” I came back and it absolutely obliterated everything I was working on. I had it all backed up though. But I was like, “Wow, okay, I can see how this gets off the rails in a hurry.”
So yeah, you learn to trust it really quickly, and then you step over the line very quickly too. But it’s so capable. It seems so trustworthy for a minute.
PETER DIAMANDIS: Any other comments you want to make on this, other than back up your data?
DAVE BLUNDIN: Back up. It’s so cheap to back everything up every ten minutes. I would just do it, man.
ALEXANDER WISSNER-GROSS: And then maybe just to comment on this. My friend John Smart likes to point out, speculatively, that the first generation of humanoid robots will accidentally fold the cat in the laundry. This is, arguably, the moral equivalent of folding the cat in the laundry.
This is where Nassim Taleb’s notion of antifragility comes in: without going through a few of these localized moral panics, “Oh, no. The AI agent wiped the code base,” we won’t get to a stable, robust system over the medium to long term. So ironically, I think these sorts of micro panics end up being net healthful for the ecosystem in the long term.
PETER DIAMANDIS: Got it. Alright. Here’s a piece of abundance news, and I love this. Nigeria accelerates learning with AI.

AI in Education: Nigeria’s Success vs. US Stagnation

PETER DIAMANDIS: So Nigerian GPT-4 pilot delivers two years of learning in just two weeks, 1,200% faster. So, you know, this really pisses me off. I mean, I’m excited about this for Nigeria and for other African nations, but, you know, here in the United States, I’m not seeing the adoption of AI in learning anywhere near as fast as we should. When are we going to see that, Dave?
DAVE BLUNDIN: Look, if it doesn’t come from us, I don’t see it coming from anywhere. I think it’s just going to bypass the incumbent education system at this stage, because we’ve had so many meetings and seen no motion whatsoever.
So the students want to learn. The students will work around whatever school and just learn on their own. One of the good things is our schools are giving virtually no workload to the students. So they have plenty of time to—
PETER DIAMANDIS: Oh my God. That’s so true. I just remember homework all the time, grade letters, and where did that go? I don’t know.
DAVE BLUNDIN: All right. Well, it’s going to be good in the sense that they’re free to pursue their own learning. We’re seeing it in Nigeria now, we’ve seen it in Estonia, and China is all in on AI. The US talks about it, but we’re still not seeing anywhere near enough. And I hope that’s going to become part of the conversation across all school systems. The teachers’ unions can’t block this. The best educators are going to be AIs, and it’s going to be immersive education.

China’s Solar Energy Dominance

PETER DIAMANDIS: Alright. Let’s talk about China. Let’s continue on our conversation about US versus China. And this week, I want to talk about energy. So this is an incredible chart that shows growth of solar in China.
And so here’s the article. China’s installed 464 gigawatts of solar capacity in just the twelve months through June. That’s epic. And while we’re focused on natural gas and coal and nuclear, which is unfortunately kind of slow, China’s just covering their countryside in solar, and I don’t understand why we’re not doing the same. Any thoughts?
ALEXANDER WISSNER-GROSS: I think it’s interesting to think about—so we see this quadratic, maybe exponential curve there. It’s interesting to ask where this goes. Arguably, if we find ourselves in a call it a solar superintelligence future, this leads inevitably to a Dyson swarm. We ring the sun from all directions with supercomputers.
PETER DIAMANDIS: Had to bring the Dyson swarm in.
ALEXANDER WISSNER-GROSS: I don’t think that’s likely. If solar superintelligence were the end state, we would probably have observed lots of Dyson swarms throughout the galaxy already. They would stick out like a sore thumb in infrared, and to my knowledge, we’re not seeing them. And so that suggests to me that this exponential growth of solar, as a critical path to superintelligence, doesn’t scale all the way to post-superintelligence, assuming this horizontal exponentiation happens.
It seems more likely that there will be other energy sources. So I think—
PETER DIAMANDIS: So much left on the table, right? So I’m a pilot. I fly out of Santa Monica Airport here. I’m flying up the coast, and I’m looking out, and I see all of these rooftops that, you know, could be producing solar. In the interim, I don’t get why we’re not pushing that here. Dave, what do you think about solar? Are you investing at all in this area?
DAVE BLUNDIN: No, I’m not, but I think that’s the problem. I should be, but I’m not, because physics is about to be solved and, who knows, fusion might come online very, very soon. This is why America gets its hands all tied up: investors won’t pour the money in out of fear that some other innovation will disrupt it before the payback. Meanwhile, China doesn’t worry about that because it’s all government funded.
And so there’s literally 200 gigawatts’ worth of solar panels sitting in warehouses that we could actually buy and deploy in our sunny Utah, Colorado, or Nevada areas. Nobody wants to take that risk, but just as a hedge, it’d be a good move. Investors won’t do it, though.
I mean, we talk about, you know, SMR, small modular reactors, generation four fission reactors, but we’re talking about those coming online, like, a decade from now. And fusion, you know, we’ll see here.

Chinese Fusion Breakthrough

PETER DIAMANDIS: Here’s the next article. Chinese fusion reactor sets record keeping a superheated plasma for 1,066 seconds at 180 million degrees Fahrenheit. Goddamn. It’s hot.
ALEXANDER WISSNER-GROSS: Fusion—remember, fusion’s not—I mean, it sounds futuristic to our 2025 ears, but it’s actually not that efficient in terms of rest mass. Light-element fusion releases less than one percent of the rest mass of the reactants as energy. If we solve physics in the next few years, we can do way better than fusion.
We could conceivably be building micro black holes, dropping matter into them, and harvesting the rest mass as Hawking radiation. There are many things we could be doing if we’re about to solve physics.
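As a quick back-of-the-envelope check of that rest-mass figure, here is a short Python sketch using the standard D-T reaction. The atomic masses are textbook tabulated values; the calculation itself is an illustration added to this transcript, not something computed by the panel.

```python
# Back-of-the-envelope check: what fraction of the reactants' rest mass
# does D-T fusion (D + T -> He-4 + n) actually convert to energy?
# Atomic masses in unified atomic mass units (u), standard tabulated values.
m_deuterium = 2.014102
m_tritium   = 3.016049
m_helium4   = 4.002602
m_neutron   = 1.008665

reactants = m_deuterium + m_tritium
products  = m_helium4 + m_neutron

# The mass defect divided by the initial mass is the fraction released as energy.
fraction = (reactants - products) / reactants
print(f"rest mass converted to energy: {fraction:.4%}")  # about 0.38%
```

That works out to roughly 0.38 percent, comfortably under the one percent figure cited; by contrast, feeding matter to a black hole and harvesting the radiation can in principle approach the full rest-mass energy.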
PETER DIAMANDIS: So I can see that. Black hole reactor in my backyard, please. What could possibly go wrong?
ALEXANDER WISSNER-GROSS: Yeah. It’s mister black hole instead of mister fusion. It could happen.

Solving Physics with AI

PETER DIAMANDIS: So let’s talk about that: solving physics. Alex, what does solving physics mean in your vision? Oh, this is so cool.
ALEXANDER WISSNER-GROSS: I think it comes down to discovering new physics with AI. And I think we’re maybe only a few years away from solving physics in the way that, earlier, I mentioned this notion of solving mathematics, in the sense of achieving professional-mathematician-level AI. I think we’re potentially only a few years away from achieving professional-physicist-level AI, both theoretical and experimental, that can unlock new physics.
If there is new physics to be found, and we have strong observational evidence that there is across many different subdisciplines of physics, then AI is arguably our best shot at unlocking it, especially at the fundamental level, where we haven’t seen major new fundamental physics in the past half century.
PETER DIAMANDIS: I think we need you to make a movie so that I can get what that means, how that’s going to play out. You know how Jarvis seemed like total science fiction, and now it’s absolute reality, but it opened our minds to how this was going to work? I want the equivalent for solving physics. I just can’t—the AI will know things, but I’ll have no ability to understand what it’s trying to say to me. I can’t comprehend string theory as it is, you know?
DAVE BLUNDIN: Well, when you start getting gravity shielding and you’re floating up to low Earth orbit and you’ve got all the energy you need, you’ll understand the implications of it. Totally.
ALEXANDER WISSNER-GROSS: And I think it’s—in the popular discourse of superintelligence, everyone is so focused on just racing to the destination. They’re going to be the proverbial dog that catches the car and wonders, so what comes after we have superintelligence? I spend a lot of my time thinking about the day after superintelligence, and I think the day after looks like solving math, physics, chemistry, biology, medicine, a bunch of other fields, and then unlocking solutions to the grand challenges that we face en masse.
PETER DIAMANDIS: You see, the biology and medicine I totally get, because its only purpose is to give us longevity, health, and happiness, and that is really clear. People are just healthy and happy. The physics side of it, it’s just like, okay, it’s discovered things beyond quantum, it’s discovered things beyond that, and it’s trying to explain them to us, and it’s building things, you know? But I can’t predict what it’s going to build next because I don’t get it. That’s the part I’m just really—
ALEXANDER WISSNER-GROSS: I don’t think it’s going to be about understanding the fundamentals of the physics it creates. I think it’s going to be about experiencing the breakthroughs that it creates in the physical universe for us. And that’s going to be absolutely fascinating.

Robotaxi Competition: Uber vs. Tesla vs. Waymo

PETER DIAMANDIS: Alright. Some quick stories on robotaxis as we near our end here. I think it’s important to note Uber has invested $300 million in Lucid EV. So this is a deal for Uber to basically purchase 20,000 of Lucid’s Gravity EVs over six years, aiming to challenge Tesla and Waymo. So interesting. Right?
So Waymo is beginning to roll out, and there’s—you know, every three minutes, there’s probably two or three Waymos that pass me by here in Santa Monica. We’re seeing a slower rollout to the robotaxis from Tesla, and there is space for a third. And so it looks like Uber’s coming in with Lucid EV. Any thoughts on this, Dave?
DAVE BLUNDIN: Yeah, it’s interesting. There’s a constant tension between user base and technology. Uber has the user base but doesn’t have the foundation model; it needs to partner for the tech. But then, you know, on the other end of the spectrum, Elon has the tech, and he’s building out, you know, the user base.
But you see Sam Altman really being the, okay, I’ve got a foundation model company, now I need to control all these user touch points. So I need Johnny Ive, I’m going to build a consumer device, I’m going to get into the browser wars. So it’s starting to look like the foundation model user base vertical integrated monopoly is going to become a real thing.
PETER DIAMANDIS: Yeah. So Uber’s got to get in the game, though. I mean, they don’t have any of the underlying tech, but they’ve got the user base. And there’s been a lot of conversation in the past: does Google buy Uber? And, you know, who’s at play here? But there’s going to be some kind of consolidation or some kind of extension. Alex, any thoughts here?
ALEXANDER WISSNER-GROSS: These are mobile data centers on wheels. I think that the metaphor that autonomous vehicles or AVs are smartphones on wheels is misplaced. These are data centers, micro data centers on wheels. So I think this is—we’re seeing the deck chairs all move around in a game of musical chairs, and I think we’re starting to see the emergence of a new class of mobile distributed data centers.
And what’s missing in my mind is algorithms, training algorithms and inference-time algorithms, that can take advantage of all of this compute that right now is being used for autonomous driving but could in principle be generalized to mobile distributed data centers. And this is Uber, and maybe other companies, finding their way slowly into the mobile data center space.
PETER DIAMANDIS: Well, you know, Dara Khosrowshahi, the CEO of Uber, is an engineer. Fundamentally, he always has been an engineer. So we should get him on the pod and—
DAVE BLUNDIN: Well, I asked him, and he said he’s very happy to come on the pod.
PETER DIAMANDIS: We should have him join us and talk about his vision here. You know, it’s interesting. Uber was so early in this game. I remember they were early in, with Travis really supporting the build-out of autonomous vehicles. Also flying cars, right? They did a lot of the earliest work in eVTOLs. And then, when Travis left, they fell back to fundamentals. But it’s time to start growing again. Otherwise, they’ll get displaced.
DAVE BLUNDIN: Yeah. They were under pressure from their shareholders to show profitability for a while there when Dara came on board, and they cut a lot of the really interesting R&D. But I think now it’s obvious they should get back in the game. It’d be a great time to get Dara and pick his brain.
PETER DIAMANDIS: Sure. He’d be awesome. He’s brilliant.

Tesla’s Retro-Futuristic Diner

PETER DIAMANDIS: I love this. I gave it a title, “Take Fries with a Model Y.” This is the rollout of Tesla’s diner here in Hollywood. We’re going to close out on this story. Of course, there is an image of Optimus serving you popcorn.
So this is a 24-hour-a-day, seven-day-a-week diner in LA combining American food and a drive-in movie experience. Orders are auto-triggered when your Tesla nears the diner. It’s a massive EV hub with 80 V4 Superchargers. I haven’t gone yet, but, Dave, when you come here and visit, we should make a side junket for sure.
Alright. You know, my—here’s my question. Why is Elon the only one who’s building out our vision of the science fiction world? You know, it’s so frustrating.
DAVE BLUNDIN: Yeah. He’s the guy. Building in public is the new rage, and being a public figure while you’re creating this stuff, it works. It attracts talent, it attracts capital, and it works so well in this hyperacute, really fast environment, and so few people are doing it. But around the incubator, a lot of the teams have embraced it, and they’re doing it.
But yeah, I guess it’s just because the older CEOs, you know, the incumbent CEOs never thought that way. And they just aren’t getting on the bandwagon. Some of them aren’t very good at it either.
PETER DIAMANDIS: Just curious. You know, the tastemaking that makes this stuff happen at Tesla is extraordinary. I mean, what I’ve seen of the details of the diner, I love it. This is like, you know, we’re finally getting there. We’re finally getting to this fun future world. Alex, you want to come visit and go for a burger?
ALEXANDER WISSNER-GROSS: Would love to. Maybe a veggie burger. I love the retro-futuristic aesthetic. I grew up reading 1950s-era, golden-age science fiction. And this reminds me of the era when cars had tail fins and people were excited about the future, future transportation, future energy. Energy was going to be too cheap to meter. This evokes all of that, and hopefully we’ll find ourselves in the near future where energy and intelligence are too cheap to meter.
PETER DIAMANDIS: I love it.

The Future of AI Investment and Innovation

PETER DIAMANDIS: You know, we have the Abundance Summit in March, and those of you who want to learn more can go to abundance360.com. It’s going to be our thirteenth or fourteenth year. I’ve committed to running the Abundance Summit for twenty-five years. And for me, it’s showing people what happened in the past year and where things are going in the years ahead. And one evening, Dave, during the Abundance Summit, we’ll have a WTF episode sort of recapping what happened in the year and in the last couple of days.
But, Alex, you were going to join me last time you came on via Zoom. Maybe you have to come here.
ALEXANDER WISSNER-GROSS: I’ll be there in person this time.
PETER DIAMANDIS: All right, I love that. I love that.

Link Ventures: Riding the Golden Wave

So before we break off, Dave, what’s new at Link Ventures?
DAVE BLUNDIN: Oh God, it’s just such a golden time. I really hope people don’t take it for granted, because, you know, I was around in my twenties when the internet exploded and everybody was succeeding and people were running a million miles an hour. And I think a lot of people underappreciate how incredibly slow the period from 2004 to 2020 was relative to what’s happening today.
But around Link Studios, people are literally running from meeting to meeting, running to the bathroom and back, just trying to keep up with the pace of change, reading Alex’s feed every week. It’s like a full-time job, just trying to keep up with everything going on.
But we had the Cambridge cops come in the other day, and one of the guys, one of the CEOs, came up to my door and said, “Dave, why are the Cambridge police in the building? What’s going on?” So I went out and talked to them, and they said, “Someone’s trying to break into your building. They’re climbing the bricks on the outside wall, somebody in a plaid shirt.”
I said, “I don’t think criminals wear plaid shirts very often. I’m not sure I buy into this.” Turned out, on one of our Harvard teams, all the math majors are rock climbers, and they like to do crazy things. So one of the math majors had cracked open a second-story window, and he was just scaling the brick wall.
And the team was like, “We can’t have that.” I was like, “This is going to make a great movie someday.” I mean, these guys are going to be the next Mark Zuckerberg, the next Social Network. Actually, the Sam Altman movie will be coming out soon too.
So I feel like I’m just living in this community where it’s all happening. It’s just a golden time. And like I said, we’ve had a one hundred percent success rate on our investments in these teams. So I’m surprised there isn’t more capital realizing that this is the moment in time: every one of these teams is likely to succeed because the tailwind is so strong. So just get invested.
And I’m not sure you’ll be able to get invested three, four, five years from now. It’s kind of now or never.

Investment Opportunities in AI

PETER DIAMANDIS: One of the things I find fascinating is that when most people ask me where they can invest in AI, it’s typically all the public companies. They don’t have access to the deals at the beginning, when they’re reasonably valued, when they’re below nine-billion-dollar valuations.
DAVE BLUNDIN: Yeah, it’s crazy. Our entry valuations are in the same place they’ve always been. And then the step-ups are like nothing I’ve ever seen, just in the first few months.
PETER DIAMANDIS: What are the entry valuations?
DAVE BLUNDIN: It’s inflated just a hair, but, you know, ten, fifteen, twenty million kind of founding-day valuation, with a team of three to five. They’ve just got the idea, but they really want to move into the lab and circulate with all the other teams. And so we’ll put a couple million bucks in to liberate them from having to go work a day job and get them focused.
But normally from there to significant revenue would have been two years, three years. Now it’s like two months, four months. I cannot even describe the difference versus just four or five years ago.
PETER DIAMANDIS: And how many folks in the incubator space?
DAVE BLUNDIN: It’s packed now, but we got another floor, so we can start filling that out. We have twenty-six companies. The biggest one is twenty-four, twenty-five people, and the smallest one is three. So I’d have to count, but it’s packed.

Predictions for the Coming Months

PETER DIAMANDIS: Alex, prediction for the next couple of months, what are you seeing coming soon?
ALEXANDER WISSNER-GROSS: I want to see several new state-of-the-art frontier models. I want to see, ideally, companies or competitors come forward with IMO gold. It would be amazing to have a competitive ecosystem like that. I’d like to see ideally at least one grand-challenge-level problem in math or some other physical science get solved by AI. I’d be a very happy camper if some or all of those predictions happen.
PETER DIAMANDIS: And Alex and I are working on that new team that’s going to be the first to figure out whether neural networks can be trained on quantum computers. We’re mapping out a very profitable pathway from point A to point B. So Alex is going to find me three or four of the smartest people on the planet to work on that project.
ALEXANDER WISSNER-GROSS: Love it. Love it.

Closing Thoughts

PETER DIAMANDIS: Well, thank you for another great WTF episode. Alex, a real pleasure to have you. I hope we’ll have you back on a regular basis. Our love and appreciation to our missing moonshot mate, Salim Ismail. Wherever you are with your son, I hope you’re having a fantastic time.
And everybody, remember, this is the real news impacting our world. It’s not about politics. It’s about technology that’s shifting our industries, our companies, how we teach our kids, how we run our nations. It’s an extraordinary time to be alive. Remember, don’t blink because it’s moving that fast.
Every week, my team and I study the top ten technology meta trends that will transform industries over the decade ahead. I cover trends ranging from humanoid robotics, AGI, and quantum computing to transport, energy, longevity, and more. There’s no fluff. Only the most important stuff that matters, that impacts our lives, our companies, and careers.
If you want me to share these meta trends with you, I write a newsletter twice a week, sending it out as a short two minute read via email. And if you want to discover the most important meta trends ten years before anyone else, this report’s for you. Readers include founders and CEOs from the world’s most disruptive companies and entrepreneurs building the world’s most disruptive tech.
It’s not for you if you don’t want to be informed about what’s coming, why it matters, and how you can benefit from it. To subscribe for free, go to diamandis.com/metatrends to gain access to the trends ten years before anyone else.
MISSION STATEMENT:
Our mission is to provide the most accurate transcripts of videos and audios online.
Copyright© 2025 The Singju Post