Anthropic set out to be "different" from OpenAI by creating a "helpful, honest, and harmless" AI dubbed Claude. Now it's releasing Claude 2.1, which is more powerful and less prone to hallucinating, or making up, information.
If you've ever used ChatGPT, you may have run into a limit when entering text into the prompt: it can only handle around 4,000 tokens. If you want it to examine a novel you're working on, that's nowhere near enough. Claude can now handle far more: 200,000 tokens, to be exact, provided you subscribe to Claude Pro, which costs $20 a month.
Tokens are how Large Language Models (LLMs) organize information. When you enter text into the prompt, those words and punctuation marks are broken into chunks and then processed in the model's context window. That context window limits how much ChatGPT, Claude, and other LLMs can handle at once. Each LLM breaks text into tokens a little differently, and Anthropic says that in Claude's case, 200,000 tokens works out to about 150,000 words.
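To get a feel for those numbers, here's a minimal Python sketch that estimates token usage from a word count using Anthropic's stated ratio of 200,000 tokens to roughly 150,000 words. It's a rough heuristic, not Anthropic's actual tokenizer, and the sample prompt is just an illustration.

```python
# Back-of-the-envelope estimate: Anthropic says 200,000 tokens ~= 150,000 words,
# i.e. roughly 0.75 words per token. Real tokenizers vary by text and language.
WORDS_PER_TOKEN = 150_000 / 200_000

CONTEXT_WINDOW = 200_000  # Claude 2.1's context limit, in tokens

def estimate_tokens(text: str) -> int:
    """Roughly estimate how many tokens a prompt will consume, from its word count."""
    return round(len(text.split()) / WORDS_PER_TOKEN)

# Example: a short prompt barely dents the window; a full novel would use most of it.
prompt = "Summarize the main themes of the chapter below: ..."
used = estimate_tokens(prompt)
print(f"~{used:,} of {CONTEXT_WINDOW:,} tokens used")
```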
For reference, that means you could theoretically feed it all of J.R.R. Tolkien's The Return of the King and still have room left over. Just don't expect Claude to analyze all of that text instantly: while a short prompt gets a near-instantaneous response, a novel-length prompt may take a few minutes to process.
Loading whole novels into Claude 2.1 may be a bad idea, though, considering the multiple lawsuits OpenAI (and now Microsoft) is facing over the use of copyrighted content without permission.
The company also says Claude 2.1 hallucinates about half as often as the previous version. That's a big deal, because LLMs like Claude and ChatGPT tend to make up facts (hallucinate). That tendency has led AIs like ChatGPT to falsely claim that people pleaded guilty to crimes and to cite nonexistent cases in legal briefs submitted by lawyers.