OpenAI rival Anthropic launched Claude 2.1 today. The latest version of the chatbot boosts its context window to 200,000 tokens, allowing you to paste the entirety of Homer’s The Odyssey for AI analysis. (Tokens are the chunks of text a model uses to organize information, and the context window is the limit on how many tokens it can parse in a single request.) The company says version 2.1 also halves Claude’s hallucination rate, leading to fewer erroneous answers (like those the ChatGPT lawyer trusted far too much). Coincidentally or not, the update arrives as the tech world watches Anthropic’s rival OpenAI descend into pandemonium.
The company says Claude 2.1’s 200K-token context window allows users to upload entire codebases, academic papers, financial statements or long literary works. (Anthropic says 200,000 tokens translates to roughly 150,000 words, or over 500 pages of material.) After uploading the material, the chatbot can provide summaries, answer specific questions about its content, compare and contrast multiple documents, or recognize patterns humans may have a harder time spotting.
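To give a sense of what that workflow might look like in code, here is a minimal sketch using Anthropic’s Python SDK and its documented Messages API; the model string (“claude-2.1”), the example file name and the prompt wording are illustrative assumptions rather than anything specified in the announcement.

```python
# Hypothetical sketch: asking Claude 2.1 to summarize a long document.
# Assumes the `anthropic` Python package is installed and ANTHROPIC_API_KEY is set.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Load a long text file (e.g., a novel or a financial report) as plain text.
with open("odyssey.txt", "r", encoding="utf-8") as f:
    document = f.read()

response = client.messages.create(
    model="claude-2.1",   # the 200K-context model discussed above
    max_tokens=1024,      # cap on the length of the reply, not the input
    messages=[{
        "role": "user",
        "content": (
            "Here is a document:\n\n" + document +
            "\n\nSummarize the key events and themes in a few paragraphs."
        ),
    }],
)

print(response.content[0].text)
```

The whole document is passed in as part of a single user message; the main constraint is that the document plus the prompt and the reply stay within the 200,000-token ceiling.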
“Processing a 200K length message is a complex feat and an industry first,” the company wrote in an announcement blog post. “While we’re excited to get this powerful new capability into the hands of our users, tasks that would typically require hours of human effort to complete may take Claude a few minutes. We expect the latency to decrease substantially as the technology progresses.”
Anthropic warns that analyzing and responding to extremely long inputs could take the AI bot a few minutes to complete, significantly longer than the seconds we typically wait for simpler queries, though the company expects that lag to shrink as the technology improves.
Hallucinations, or confidently delivered but inaccurate answers, are still prevalent in this generation of AI chatbots. However, Anthropic says Claude 2.1 has cut its hallucination rate in half compared to Claude 2.0. The company attributes some of the progress to the model’s improved ability to distinguish between making an incorrect claim and admitting uncertainty: Claude 2.1 is about twice as likely to admit it doesn’t know an answer than to provide a wrong one.
Anthropic says Claude 2.1 also commits 30 percent fewer errors when answering questions about extremely long documents. In addition, it is three to four times less likely to mistakenly conclude that a document supports a particular claim when making full use of its longer context window.
The updated bot adds a few perks specifically for developers, too. A new Workbench console lets devs refine prompts “in a playground-style experience and access new model settings to optimize Claude’s behavior.” For example, developers can test multiple prompt variations and generate code snippets for using those prompts with Anthropic’s SDKs. Another new developer beta feature, “tool use,” lets Claude “integrate with users’ existing processes, products, and APIs.” The company cites examples like using a calculator for complex equations, translating plain-language requests into structured API calls, querying a web search API, tapping into clients’ private APIs or connecting to product datasets. The company cautions that the tool use feature is in early development and urges customers to submit feedback.
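The announcement doesn’t show what the tool use beta looks like in practice, but the calculator example might translate to something like the sketch below. The tools parameter and response handling follow the pattern Anthropic’s Python SDK uses for tool definitions today; the exact interface of the early beta, and whether claude-2.1 accepts it in this form, are assumptions here.

```python
# Hypothetical sketch of the "calculator" tool-use pattern described above.
# The tool definition and response shapes follow Anthropic's current Messages API;
# the beta mentioned in the article may have used a different interface.
import anthropic

client = anthropic.Anthropic()

calculator_tool = {
    "name": "calculator",
    "description": "Evaluate a basic arithmetic expression and return the result.",
    "input_schema": {
        "type": "object",
        "properties": {
            "expression": {"type": "string", "description": "e.g. '1234 * 5678'"},
        },
        "required": ["expression"],
    },
}

response = client.messages.create(
    model="claude-2.1",  # assumption: the beta may require a different model or flag
    max_tokens=512,
    tools=[calculator_tool],
    messages=[{"role": "user", "content": "What is 1234 multiplied by 5678?"}],
)

# If the model decides to call the tool, it returns a structured request
# instead of a plain-text answer; the client is responsible for executing it.
for block in response.content:
    if block.type == "tool_use" and block.name == "calculator":
        print("Model requested:", block.input["expression"])
```

In a full loop, the client would evaluate the expression itself and return the result to Claude in a follow-up message so the model can incorporate it into its final answer.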