Anthropic’s “friendly” AI chatbot, Claude, is now available for more … – The Verge

By Emma Roth, a news writer who covers the streaming wars, consumer tech, crypto, social media, and much more. Previously, she was a writer and editor at MUO.
Claude, the AI chatbot that Anthropic bills as easier to talk to, is finally available for more people to try. The company has announced that everyone in the US and UK can test out the new version of its conversational bot, Claude 2, from its website.
Its public availability allows Claude to join the ranks of ChatGPT, Bing, and Bard, all of which are available to users across numerous countries. That means we all have one more AI chatbot to play around with, but Anthropic says to “think of Claude as a friendly, enthusiastic colleague or personal assistant who can be instructed in natural language to help you with many tasks.”
Claude, which Anthropic also describes as “helpful, harmless, and honest,” can do things like create summaries, write code, translate text, and more. While this may sound a lot like Google’s Bard or Microsoft’s Bing chatbot, Anthropic says it’s built differently than those bots. It has a more conversational tone than its counterparts — and supposedly even has a sense of humor. (I’ll have to test that out for myself.) It’s also guided by a set of principles, called a “constitution,” that it uses to revise its responses by itself instead of relying on human moderators.
While the Google-backed Anthropic initially launched Claude in March, the chatbot was only available to businesses by request or as an app in Slack. With Claude 2, Anthropic is building upon the chatbot’s existing capabilities with a number of improvements. In addition to the ability to craft longer responses, Claude 2 is also slightly more skilled in math, coding, and reasoning when compared to the previous Claude model.
As an example, Anthropic says Claude 2 scored a 76.5 percent on the multiple choice section of the bar exam, while the older Claude 1.3 got a 73 percent. Claude 2 is also two times better at “giving harmless responses,” according to Anthropic. That means it should be less likely to spit out harmful content when you’re interacting with it when compared to the previous model, although Anthropic doesn’t rule out the possibility of jailbreaking.
Unlike Bard and Bing, however, Claude 2 still isn’t connected to the internet and is trained on data up to December 2022. While that means it can’t surface up-to-the-minute information on current events (it doesn’t even know what Threads is!), its dataset is still more recent than the one that the free version of ChatGPT uses. (ChatGPT’s knowledge cuts off after 2021.) Sandy Banerjee, a representative for Anthropic, tells The Verge you can still feed Claude a recently published website or webpage, and it should be able to field queries about it.
Additionally, Anthropic recently expanded Claude’s context window to around 75,000 words. That means you can upload dozens of pages to the bot — or even an entire novel — for it to parse. So if you need a quick summary of a complicated and very long research paper, Claude’s your bot. Other models have much smaller limits, with the free version of ChatGPT sitting at a maximum of around 3,000 words. Now that Claude 2 is publicly available, I’m looking forward to giving this a try and seeing if a longer context window is enough to throw this “harmless” bot off the rails, as we saw with Bing.