Amazon bets $4 billion on Anthropic's Claude, the chatbot platform rivaling ChatGPT and Google's Bard
Amazon said Monday it will invest up to $4 billion in Anthropic, the company behind the powerful chatbot Claude. Claude has emerged as one of the leading competitors to OpenAI’s ChatGPT and Google’s Bard in the race to dominate generative AI.
Anthropic has not had such significant backing until now, but arguably has needed it, given the massive expense of staying competitive in building the large language model (LLM) technology that drives such chatbots.
The investment is part of a broader partnership announced by the two companies, under which Anthropic agreed to use Amazon’s cloud platform for “mission-critical workloads” in return for the backing. The deal is Amazon’s first major tie to a leading chatbot, at a time when cloud competitors Microsoft and Google have already bet big on their respective chatbot platforms.
Indeed, the investment runs against statements Amazon has made in recent months about wanting to remain agnostic among LLM companies. It’s possible, though, that Amazon had been hankering to make a big move all along and was taking the agnostic position as a way to justify its relative slowness in placing such a bet.
It’s also true that Amazon continues to have multiple horses in the LLM race, so this investment may be as much about diversifying its efforts as about ensuring access to technology, talent and insight. With last week’s announcement of Alexa LLM, Amazon is entering the race for commercial, closed-source models in parallel with its business of providing a platform for serving generative models, Amazon Bedrock.
Microsoft has invested more than $10 billion in OpenAI to secure exclusive rights to offer OpenAI’s chatbot technology within its own cloud services, including having OpenAI prioritize Microsoft’s Azure cloud platform. Meanwhile, Google has pushed its own Bard chatbot, and Meta has invested in its Llama platform, which it has open-sourced so that other companies can use Llama’s foundation LLM technology. (Google did invest $300 million in Anthropic in February, and Anthropic chose Google Cloud Platform as its preferred cloud provider at the time.)
It’s a major injection of support for Claude at a moment when cash is critical to funding the expensive work of training competitive LLMs, which use hundreds of billions of parameters and require massive amounts of compute. Anthropic had raised only about $2.7 billion to date.
Amazon and Anthropic said the new strategic collaboration will pool their technologies and expertise “in safer AI,” and will accelerate Anthropic’s development of foundation LLMs to make them widely accessible to AWS customers. One of Anthropic’s major selling points is its view that AI can be dangerous if not given the proper safeguards. The company has invested heavily to ensure that its Claude foundation model abides by specific ground rules in producing ethical outputs, rooted in the principles of what it calls Constitutional AI. Anthropic has tried to open a perception gap here in comparisons with OpenAI’s ChatGPT, which is not as strict.
Here are the key elements of the agreement:
- Amazon will invest up to $4 billion in Anthropic.
- Anthropic will run its mission-critical workloads on Amazon’s cloud platform.
- AWS customers will gain access to Anthropic’s models, including customization and fine-tuning, through Amazon Bedrock.
VentureBeat recently published a story comparing Claude’s new professional version with ChatGPT’s pro version, and one clear distinction was Claude’s superior ability to summarize long content, thanks to its industry-leading 100,000-token context window.
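As a concrete illustration of what that long context window enables, here is a minimal sketch of sending an entire document to Claude for summarization in a single call. It assumes the completions-style Python SDK and the claude-2 model name Anthropic offered around the time of this article; the file name and prompt wording are placeholders, not details from either company.

```python
# Minimal sketch: summarizing a long document in a single call to Claude.
# Assumes the 2023-era "anthropic" Python SDK and an ANTHROPIC_API_KEY in the environment.
from anthropic import Anthropic, HUMAN_PROMPT, AI_PROMPT

client = Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# A report that would overflow smaller context windows can be passed whole,
# since Claude's context window holds roughly 100,000 tokens.
with open("long_report.txt", "r", encoding="utf-8") as f:  # placeholder file name
    document = f.read()

completion = client.completions.create(
    model="claude-2",            # model name assumed for the period of this article
    max_tokens_to_sample=1000,   # cap on the length of the generated summary
    prompt=f"{HUMAN_PROMPT} Summarize the key points of this document:\n\n{document}{AI_PROMPT}",
)

print(completion.completion)
```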
In its statement, Amazon said the investment expands its overall generative AI offerings at “all three layers of the generative AI stack:”
“At the bottom layer, AWS continues to offer compute instances from NVIDIA as well as AWS’s own custom silicon chips, AWS Trainium for AI training and AWS Inferentia for AI inference,” the company said. “At the middle layer, AWS is focused on providing customers with the broadest selection of foundation models from multiple leading providers where customers can then customize those models, keep their own data private and secure, and seamlessly integrate with the rest of their AWS workloads—all of this is offered through AWS’s new service, Amazon Bedrock.”

Today’s announcement falls into this middle layer, the company said, by giving customers access to Anthropic’s customizable models, allowing them to use their own proprietary data to create private models, and offering fine-tuning capabilities via a self-service feature within Amazon Bedrock.
Finally, at the top layer, AWS offers “generative AI applications and services for customers like Amazon CodeWhisperer, a powerful AI-powered coding companion, which recommends code snippets directly in the code editor, accelerating developer productivity as they code,” the company said.
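For developers, the middle-layer access described above amounts to an API call against Amazon Bedrock. The following is a minimal sketch using boto3’s bedrock-runtime client; the anthropic.claude-v2 model identifier, the us-east-1 region and the prompt are assumptions for illustration rather than details from the announcement.

```python
# Minimal sketch: invoking an Anthropic model on Amazon Bedrock with boto3.
# Region, model identifier, prompt and parameters are illustrative assumptions.
import json
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

request_body = json.dumps({
    # Bedrock's Anthropic models of this period expect the Human/Assistant prompt format.
    "prompt": "\n\nHuman: In two sentences, what is a foundation model?\n\nAssistant:",
    "max_tokens_to_sample": 300,
    "temperature": 0.5,
})

response = bedrock.invoke_model(
    modelId="anthropic.claude-v2",   # identifier assumed from Bedrock's model catalog at the time
    contentType="application/json",
    accept="application/json",
    body=request_body,
)

result = json.loads(response["body"].read())
print(result["completion"])
```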