Old AI is beating new AI. Here's why — and why it matters – Quartz


Meta made more than $18 billion in profit last quarter, but it wasn't the kind of AI everyone's talking about that drove the windfall. While CEO Mark Zuckerberg spent billions building "superintelligence" infrastructure and chasing dreams of generative AI, the company's actual revenue gains came from old-school machine learning: the same recommendation algorithms that have quietly powered Facebook's and Instagram's ad targeting for years.
As the AI boom continues, a growing disconnect has emerged. On one side is where the hype and money are flowing: toward generative AI and large language models. On the other is where companies are actually seeing results. While billions pour into ChatGPT-style technologies, traditional, less splashy, non-generative techniques continue to quietly power everything from Meta's advertising profits to personalized medical diagnostics to rocket engine design.
This divide is creating both overlooked opportunities for investors and a mismatch between the AI that's getting funded and the AI that's actually working.
The numbers from Meta's second-quarter earnings call tell a stark story. Meta CFO Susan Li was blunt: "[W]e don't expect that the GenAI work is going to be a meaningful driver of revenue this year or next year." Meanwhile, the company's traditional machine learning systems delivered a roughly 5% increase in ad conversions on Instagram and 3% on Facebook.
The difference boils down to what these AI systems do. The generative AI getting attention today — large language models like ChatGPT and image generators like Midjourney — creates new content by predicting what should come next. Everything else represents a vast toolkit of different technologies, including neural networks that classify images, algorithms that detect fraud, systems that recommend products, and models that optimize supply chains. Generative AI is relatively new, while these other approaches have been developed for years, and in some cases, decades.
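The distinction can be sketched in a few lines of Python. This is an illustrative toy, not Meta's systems: a nearest-centroid classifier stands in for the discriminative models that label inputs (is this ad a likely click?), and a bigram model stands in for the generative models that predict what comes next. All names and data below are invented for the example.

```python
from collections import Counter

# "Traditional" ML: a nearest-centroid classifier. It maps an input
# (a feature vector) to one of a fixed set of labels.
def train_centroids(examples):
    # examples: list of (features, label); features is a tuple of numbers
    sums, counts = {}, Counter()
    for feats, label in examples:
        counts[label] += 1
        if label not in sums:
            sums[label] = [0.0] * len(feats)
        for i, v in enumerate(feats):
            sums[label][i] += v
    return {lbl: tuple(s / counts[lbl] for s in vec) for lbl, vec in sums.items()}

def classify(centroids, feats):
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda lbl: dist(centroids[lbl], feats))

# Generative AI, reduced to its simplest ancestor: a bigram model that
# predicts the next word, i.e. creates new content one token at a time.
def train_bigrams(text):
    words = text.split()
    table = {}
    for a, b in zip(words, words[1:]):
        table.setdefault(a, Counter())[b] += 1
    return table

def next_word(table, word):
    return table[word].most_common(1)[0][0]

# Toy ad-engagement data: two features per impression, one label.
clicks = [((0.9, 0.1), "clicks"), ((0.8, 0.2), "clicks"),
          ((0.1, 0.9), "skips"), ((0.2, 0.8), "skips")]
model = train_centroids(clicks)
print(classify(model, (0.85, 0.15)))  # -> clicks

lm = train_bigrams("old ai beats new ai and old ai wins")
print(next_word(lm, "old"))  # -> ai
```

The classifier's output space is fixed in advance; the bigram model's output is open-ended text. Scaled up by many orders of magnitude, that is roughly the gap between Meta's ad-ranking systems and a large language model.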
The newness of generative AI explains both the excitement and the limited results so far. When discussing Meta's experiments with Llama 4 to build autonomous AI agents, Zuckerberg acknowledged the work is "happening in low volume right now, so I'm not sure that result, by itself, was a major contributor to this quarter's earnings." The trajectory, he said, is "very optimistic."
That optimism is shared across the investment world. Private investment in generative AI reached $33.9 billion in 2024, up 18.7% from 2023 and more than 8.5 times higher than 2022 levels, according to Stanford's AI Index report. The sector now represents more than 20% of all AI-related private investment, which itself hit a record $252.3 billion. The U.S. dominates this landscape, with American private AI investment reaching $109.1 billion, almost 12 times higher than China's $9.3 billion.
But the venture capital community's intense focus on generative AI doesn't mean traditional machine learning has been abandoned entirely. The funding landscape is more nuanced and reveals the complexity of investing in a rapidly evolving technology space.
"This really feels like a sea change and we are still in the early days," said Amy Cheetham, an early-stage investor at Costanoa Ventures, comparing generative AI to previous technology shifts like the internet, mobile, and cloud computing. "It's going to be really hard to build any sort of company without integrating some component of it somewhere in their product, or in their internal tooling."
The issue is that generative AI's success creates its own momentum: startups in the category are growing revenue at rates the industry has not seen before, with far smaller teams. That makes them irresistible to investors, but it also means some firms founded before the generative AI boom picked up steam in 2023 are getting overlooked, even when they deserve more attention.
“I think a lot of them are getting passed on by investors for not the best reasons, and it's just because there's something slightly sexier in the generative AI category,” Cheetham said.
Some sectors never abandoned the old approaches. Medicine has been using traditional AI for decades. Neural networks for cervical cancer screening were being tested in clinical trials as early as 1995.
Recent research continues this tradition. One study analyzed 20 years of blood test data from tens of thousands of patients to create personalized "normal" ranges instead of the one-size-fits-all reference intervals doctors currently use.
"What's normal for you may not be normal for someone else," said Brody Foy, an assistant professor of laboratory medicine and pathology at the University of Washington, who conducted the research.
The preference for proven approaches reflects medicine's unique constraints. Patient privacy laws like HIPAA create additional hurdles for any new technology handling health data. More fundamentally, medical errors can be fatal, making doctors and hospitals naturally cautious about adopting experimental AI systems. 
But beyond these safety concerns, the challenge isn't always finding better technology, but making any technology work within existing systems. Medical workflows are notoriously complex, involving multiple specialists, regulatory requirements, and patient safety protocols that can take years to navigate.
“There can be really good uses for the new big tech, but often it's not the tech that was the limiter,” Foy said. “The limiter was how do you integrate that in a really complex system where people are making complex choices?”
The same integration challenges exist in industries where getting it wrong can be catastrophic. In aerospace, where rocket engines either work perfectly or explode, Dubai-based startup Leap 71 has built an AI system called Noyron that operates deterministically, based on embedded physics laws, rather than making probabilistic guesses like generative AI.
"They actually encapsulate the logic of how to build a rocket engine," said Lin Kayser, Leap 71's cofounder. "We use generative AI to read papers and summarize information for us. But otherwise, we do not need it in creating rockets right now."
This approach has already produced working rocket engines. In 2024, Noyron designed a small rocket engine that was 3D printed in one piece and successfully test-fired. In January 2025, the company designed an even more complex engine type. The company now aims to scale up to engines that could compete with SpaceX's Raptor engine by 2029.
The success has brought Leap 71 into the broader conversation about AI's potential, even though its approach differs fundamentally from the generative models. The hype can be a distraction, something Kayser has observed in 30 years as an entrepreneur.
"I don't mind the publicity because it sends customers our way," he said. "But at the same time, we're just trying to be diligent about building rockets."
At the institutional level, the picture looks different. Universities maintain space for both experimental generative AI research and proven traditional approaches to coexist. Researchers can pursue riskier, unproven ideas while VCs need market-ready solutions that can generate returns, according to Vanessa Parli, the director of research at Stanford's Human-Centered AI Institute and a member of the AI Index steering committee.
The appeal of generative AI has spread far beyond computer science departments into fields such as biology, psychology, and environmental science. The technology's user-friendly interface makes it attractive to researchers without a deep AI background. But in areas such as sustainability, researchers keep working on simpler methods that often work better due to smaller datasets and specific technical requirements.
"[Generative AI] is not the only type of AI being developed," Parli said. Beyond building tools for specific fields, Stanford researchers are pursuing something bigger. At Stanford, she sees researchers continuing to work on approaches that could represent the next wave of breakthroughs.
The challenge isn't that this research will disappear. It's that traditional AI researchers are competing against a well-funded marketing machine that dominates headlines and pitch meetings. If generative AI hits obstacles or fails to deliver on its promises, they could be positioned to step in, provided they borrow some promotional tactics from the generative AI world.
"The folks who are working on non-genAI may want to consider help with PR and marketing," Paril said.
