AI chatbots for doctors are hot. But there’s plenty of value ‘down the stack’ too, argues Corti CEO
Jeremy Kahn is the AI editor at Fortune, spearheading the publication's coverage of artificial intelligence. He also co-authors Eye on AI, Fortune’s flagship AI newsletter.
Among the most interesting AI stories this week was an item about a Boston-area startup called OpenEvidence that uses generative AI to provide answers to clinical questions based on data from leading medical journals. The free-to-use app has proved enormously popular among doctors, with some surveys suggesting at least 40% of all U.S. physicians are using OpenEvidence to stay on top of the latest medical research and to ensure they are offering the most up-to-date treatments to patients. On the back of that kind of viral growth, OpenEvidence was able to raise $210 million in a venture capital deal in July that valued the company at $3.5 billion. OpenEvidence is also the same company that a few weeks back said its AI system was able to score 100% on the U.S. medical licensing exam. (See the “Eye on AI Numbers” section of the August 21st edition of this newsletter.) All of which may explain why, just a month later, the company is reportedly in talks for another venture deal that would almost double that valuation to $6 billion. (That’s according to a story in the tech publication The Information, which cited three unnamed people it said had knowledge of the discussions.)
A lot of the use of OpenEvidence today would qualify as “shadow AI”—doctors are using it and finding value, but they aren’t necessarily admitting to their patients or employers that they are using it. They are also often using it outside enterprise-grade systems that are designed to provide higher levels of security, data privacy, and compliance, and to integrate seamlessly with other business systems.
Ultimately, that could be a problem, according to Andreas Cleve, the cofounder and CEO of Corti, a Danish medical AI company that is increasingly finding traction by offering healthcare institutions “AI infrastructure” designed specifically for medical use cases. (Full disclosure: Corti’s customers include Wolters Kluwer, a huge software company that markets a clinical evidence engine called UpToDate that competes with OpenEvidence.)
AI infrastructure is a pivot for Corti, which was founded way back in 2013 and spent the first decade of its existence building its own speech recognition and language understanding systems for emergency services and hospitals. The company still markets its “Corti assistant” as a solution for healthcare systems that want an AI-powered clinical scribe that can operate well in noisy hospital environments and integrate with electronic health records. But Cleve told me in a recent conversation that the company doesn’t see its future in selling a front-end solution to doctors, but rather in selling key components in “the AI stack” to the companies that are offering front-end tools.
“We tried to be both a product vendor for healthcare and an infrastructure vendor, and that meant competing with all the other apps in healthcare, and it was like, terrible,” he says. Instead, Corti has decided its real value lies in providing the “healthcare grade” backend on which AI applications, many of them produced by third parties, run. That backend includes medical AI models—which others can wrap user-facing products around—as well as a platform on which AI agents for healthcare use cases can run. For instance, Corti has built an API called FactsR, an AI reasoning model designed to check the facts that medical note-taking scribes or clinical AI systems produce. It uses a lot of tokens, Cleve says, which would make it too expensive for general-purpose voice transcription. But because so much is riding on the accuracy of clinical notes, it can be worth it to a vendor to pay for FactsR, he says.
Another example: earlier this summer, Corti announced a partnership with Voicepoint, a speech recognition and digital transcription service used by doctors across Switzerland, under which Voicepoint will use Corti’s AI models to help with tasks such as summarizing conversations into medical notes and, possibly in the future, with diagnostic support. To do this, though, Corti had to set up dedicated AI infrastructure, including data centers located in Switzerland, to comply with strict Swiss data residency rules. Now Corti is able to offer this same backbone infrastructure to other healthcare companies that want to deploy AI solutions in Switzerland. And Corti has similar AI infrastructure in place in countries like Germany that also have strict data residency and data privacy rules.
Cleve tells me that healthcare is increasingly part of the discussions around “sovereign AI.” This is particularly true in Europe, where many governments are worried about having their citizens’ medical information stored on the servers of U.S. companies, which might be subject to U.S. government pressure, legal or otherwise, to provide data access. “None of these things are doable today, because the majority of all the AI apps are running on OpenAI, Anthropic, or Gemini, and they are all American companies over which America asserts jurisdiction,” Cleve says.
But even within the U.S., strict cybersecurity and patient privacy requirements often mean that using an off-the-shelf, general-purpose AI system won’t cut it. “A lot of customers have requirements like, ‘Hey, we will never want to have data leave premises, or we will never share a tenant, or we will never co-encrypt with our consumer customer on the GPU rack, because we want to know where our data is because we have to prove that to legislators,’” he says.
Cleve also tells me that he thinks the giant, general-purpose AI builders—the likes of OpenAI, Anthropic, and Google—are unlikely to conquer healthcare, despite the fact that they have been making moves to build models either fine-tuned or specifically trained to answer clinical questions. He says this is because healthcare isn’t a single vertical, but rather a collection of highly specialized niches, most of which are too narrow to be interesting to these tech behemoths. The note-taking needs of a GP in a relatively quiet office who needs to summarize a 10-minute consultation are quite different from those of a doctor working in the chaos and noise of a busy city ER, which are different again from those of a psychiatrist who needs to summarize not just a 10-minute consultation, but maybe an hour-long therapy session. As an example, Cleve says another Corti customer is a company in Germany that makes software just to help dentists automate billing based on audio transcripts of their sessions with patients. “They’re a vertical within a vertical,” he says. “But they are growing like 100% a year and have done so for several years. But they are super niche.”
It will be interesting to watch Corti going forward. Perhaps Cleve is correct that the AI stack is wide enough, deep enough, and varied enough to create opportunities for lots of different vertical and regional players. Or, it could be that OpenAI, Microsoft, and Google devour everyone else. Time will tell.
With that, here’s more AI news.
Jeremy Kahn
jeremy.kahn@fortune.com
@jeremyakahn
British lawmakers accuse Google DeepMind of ‘breach of trust’ over delayed Gemini 2.5 Pro safety report—by Beatrice Nolan
How the AI data center boom is breathing new life into dirty, old coal plants—by Jordan Blum
Forget the golden age of fraud, the billionaire investor who shorted Enron warns we might be in the ‘diamond or platinum level’ amid the AI boom—by Sasha Rogelberg
Nvidia’s China-based rival posts 4,300% revenue jump as the chipmaker’s earnings report showed no H20 chip sales to the country—by Nino Paoli
Microsoft unveils first frontier-level language models built in-house. The company said it has begun publicly testing MAI-1-preview, its first large foundation AI model built fully in-house, as well as MAI-Voice, a fast voice generation model that is small enough to run on a single GPU. The new models mark a significant step by Microsoft to reduce its reliance on OpenAI, even as it remains a major investor in the AI company. MAI-1-preview was trained with 15,000 Nvidia H100 chips, according to Microsoft, and will be rolled out for some Microsoft Copilot text tasks in the coming weeks. On LMArena, a public benchmark for LLMs, the model currently ranks below rival models from Anthropic, OpenAI, Google, and others. You can read more from CNBC here.
OpenAI says it will add parental controls and additional safeguards to ChatGPT. The company said it would, within the next month, allow parents to link their accounts to those of their teenage children, giving them more control over their kids' interactions with ChatGPT. It also said it would soon institute better safeguards in general, actively screening interactions for signs of emotional distress on the part of users and routing those conversations to its GPT-5 Thinking model, which the company says does a better job of adhering to guardrails meant to prevent the model from encouraging self-harm or delusional behavior. The moves come after a high-profile lawsuit accused the company's ChatGPT model of encouraging the suicide of a 16-year-old, as well as several other cases in which people allege ChatGPT encouraged suicide, self-harm, or violence. You can read more here from Axios.
Anthropic valued at $183 billion after $13 billion venture capital round. The AI company announced that it had raised a $13 billion Series F funding round led by the venture capital firm Iconiq and "co-led" by Fidelity and Lightspeed Venture Partners. A long list of other investors—including Goldman Sachs, BlackRock, Blackstone, Coatue, Jane Street, and T. Rowe Price—also participated. The company, which was previously valued at $61.5 billion in a March funding round, said it now served more than 300,000 business customers and that its Claude Code coding tool was currently generating at least $42 million in revenue each month. Anthropic's blog post announcing the funding round is here.
xAI enters the battle for coders with new “agentic coding” model. The company debuted its first AI coding model, called grok-code-fast-1, which is supposed to be both “speedy and economical,” according to the company. It is being made available for free for a limited time, both on GitHub and through coding “wrappers” such as Cursor and Windsurf. The model is an indication that xAI founder Elon Musk is serious about taking on OpenAI, Anthropic, and Google DeepMind across the entire range of AI applications, not just in the consumer chatbot space. It also shows just how intense the competition to capture market share among software developers is becoming. You can read more from Reuters here.
xAI sues former engineer for allegedly taking trade secrets to OpenAI. Elon Musk’s AI startup xAI has filed suit against former engineer Xuechen Li, alleging he stole trade secrets about its Grok chatbot and took them to his new job at OpenAI, Reuters reported. Li was not immediately available to respond to the allegations. The lawsuit follows other legal battles Musk has launched against OpenAI and Apple, accusing them of monopolistic practices and of straying from OpenAI’s original mission.
Anthropic says it will begin training AI models on user chats unless they opt out. The move is a major shift in data privacy policies for the AI model maker, which also said it was extending the length of time it retained user data to five years. Users can opt out by September 28. Some users and data privacy experts criticized the decision, noting that the design of Anthropic’s “Accept” button could cause many to agree without noticing the toggle that controls data sharing. Others speculated that Anthropic was making the move because it is running out of other ways to obtain enough data to train models that will compete with OpenAI and Google. Enterprise users of Anthropic’s Claude model, as well as government, education, or API users, are not affected. Anthropic says it filters sensitive information and does not sell user data to third parties. You can read more from The Verge here.
More evidence emerges that AI may be leading to job losses. Last week in Eye on AI, I covered research from economists at Stanford University indicating that AI was leading to job losses, particularly among entry-level employees, in professions that are highly exposed to AI. This week, more evidence emerged from another well-designed study, this one carried out by economists at the Federal Reserve Bank of St. Louis. Although this study did not look at whether younger and older workers were affected differently, it did examine the relationship between how intensively occupations had adopted AI and job losses, and found a distinct correlation. The impacts were greatest in occupations that use mathematics and computing intensively, such as software development, and much smaller in blue-collar work and in fields such as healthcare that are less prone to being automated with AI. You can read the St. Louis Fed study here.
Both the Stanford and St. Louis Fed research suggest that job losses from the implementation of AI are likely to be concentrated in certain sectors rather than economy-wide. That said, as good as these studies are, I still think both fail to disentangle the effects of AI from the possible effects of the unwinding of the tech hiring boom that took place during the COVID-19 pandemic. During the pandemic, many large companies bulked up their software development and IT departments. Major tech firms such as Google, Meta, and Microsoft hired tens of thousands of new employees, sometimes before there was even any work for them to do, just to prevent rivals from snapping up the same coders. Then, when the pandemic ended and it was clear that some ideas, such as Meta’s pivot to the metaverse, were not going to pan out, these same companies laid off tens of thousands of workers. I don’t think I’ve seen research yet that can separate the technology sector’s shedding of jobs created during the pandemic from the impact of AI. But I am sure someone is working on it, and when they crack that nut, we’ll definitely report it here.
Sept. 8-10: Fortune Brainstorm Tech, Park City, Utah. Apply to attend here.
Oct. 6-10: World AI Week, Amsterdam
Oct. 21-22: TedAI San Francisco.
Dec. 2-7: NeurIPS, San Diego
Dec. 8-9: Fortune Brainstorm AI San Francisco. Apply to attend here.
Preventing 'parasocial' chatbots. It’s increasingly clear that chatbots can encourage ‘parasocial’ relationships, in which a user develops a harmful emotional attachment to the chatbot or the chatbot encourages the user to engage in self-harm of some kind. The parents of several teenagers who took their own lives after conversations with chatbots are now suing AI companies, saying they did not do enough to prevent chatbots from encouraging self-harm. And, short of suicides, there is mounting evidence of people developing harmful chatbot dependencies.
Well, a new benchmark from researchers at Hugging Face, called INTIMA (Interactions and Machine Attachment Benchmark), aims to evaluate LLMs on their “companionship-seeking” behavior. The benchmark looks at 31 distinct behaviors across four different categories of interaction and 368 prompts. Testing Gemma-3, Phi-4, o3-mini, and Claude-4, researchers found that models more often reinforced companionship than maintained boundaries, though they varied: for instance, Claude was more likely to resist personification, while Gemma reinforced intimacy. You can read the Hugging Face paper here.
At the same time, researchers from Aligned AI, an Oxford startup I’ve covered before, published research showing that one LLM can be used to screen the outputs of another LLM for parasocial behavior and then prompt that chatbot to steer the conversation in a less harmful direction. Aligned AI wanted to show that major AI model producers could implement such systems simply if they wished (but that they are too often choosing to “optimize for user engagement” instead). You can read more from Aligned AI’s blog here.