Curated by THEOUTPOST
On Fri, 24 Jan, 4:03 PM UTC
3 Sources
[1]
Anthropic adds "Citations" in bid to avoid confabulating AI models
On Thursday, Anthropic announced Citations, a new API feature that helps Claude models avoid confabulations (also called hallucinations) by linking their responses directly to source documents. The feature lets developers add documents to Claude's context window, enabling the model to automatically cite specific passages it uses to generate answers.

"When Citations is enabled, the API processes user-provided source documents (PDF documents and plaintext files) by chunking them into sentences," Anthropic says. "These chunked sentences, along with user-provided context, are then passed to the model with the user's query."

The company describes several potential use cases for Citations, including summarizing case files with source-linked key points, answering questions across financial documents with traced references, and powering support systems that cite specific product documentation. In its own internal testing, the company says the feature improved recall accuracy by up to 15 percent compared to custom citation implementations created by users within prompts.

While a 15 percent improvement in accurate recall doesn't sound like much, the new feature still attracted interest from AI researchers like Simon Willison because of its fundamental integration of Retrieval Augmented Generation (RAG) techniques. In a detailed post on his blog, Willison explained why citation features are important. "The core of the Retrieval Augmented Generation (RAG) pattern is to take a user's question, retrieve portions of documents that might be relevant to that question and then answer the question by including those text fragments in the context provided to the LLM," he writes. "This usually works well, but there is still a risk that the model may answer based on other information from its training data (sometimes OK) or hallucinate entirely incorrect details (definitely bad)."
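The RAG pattern Willison describes can be sketched in a few lines: retrieve the document fragments most relevant to a question, then place them in the prompt so the model answers from them rather than from memory. The keyword-overlap scoring below is a deliberately naive stand-in for real retrieval (embeddings, BM25, etc.), purely for illustration; none of this reflects Anthropic's actual implementation.

```python
# Minimal sketch of the RAG pattern: retrieve relevant fragments,
# then assemble them into the prompt alongside the user's question.
# Scoring is naive keyword overlap, for illustration only.

def retrieve(question: str, documents: list[str], top_k: int = 2) -> list[str]:
    """Rank documents by how many question words they share."""
    q_words = set(question.lower().split())
    scored = sorted(
        documents,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(question: str, documents: list[str]) -> str:
    """Assemble the retrieved fragments and the question into one prompt."""
    fragments = retrieve(question, documents)
    context = "\n\n".join(f"Source: {f}" for f in fragments)
    return f"{context}\n\nUsing only the sources above, answer: {question}"

docs = [
    "Citations chunks source documents into sentences.",
    "Claude 3.5 Haiku is a small fast model.",
    "The API passes chunked sentences to the model with the query.",
]
prompt = build_prompt("How does Citations chunk documents?", docs)
```

The residual risk Willison notes sits in the last step: nothing forces the model to answer only from the retrieved fragments, which is the gap Citations aims to close.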
[2]
Anthropic's new Citations feature aims to reduce AI errors | TechCrunch
In an announcement perhaps timed to divert attention away from OpenAI's Operator, Anthropic on Thursday unveiled a new feature for its developer API called Citations, which lets devs "ground" answers from its Claude family of AI models in source documents such as emails. Anthropic says Citations allows its AI models to provide detailed references to "the exact sentences and passages" from docs they use to generate responses. As of Thursday afternoon, Citations is available in both Anthropic's API and Google's Vertex AI platform.

As Anthropic explains in a blog post, with Citations devs can add source files to have models automatically cite claims that they inferred from those files. Citations is particularly useful in document summarization, Q&A, and customer support applications, Anthropic says, where the feature can nudge models to insert source citations.

Citations isn't available for all of Anthropic's models -- only Claude 3.5 Sonnet and Claude 3.5 Haiku. Also, the feature isn't free: Anthropic notes that Citations may incur charges depending on the length and number of the source documents. Based on Anthropic's standard API pricing, which Citations uses, a roughly 100-page source doc would cost around $0.30 with Claude 3.5 Sonnet, or $0.08 with Claude 3.5 Haiku. That may well be worth it for devs looking to cut down on hallucinations and other AI-induced errors.
[3]
Anthropic's New Feature Will Make Claude's Responses More Reliable
Citations will enable developers to create document summarisation tools

Anthropic introduced a new application programming interface (API) feature on Thursday to let developers ground the responses generated by artificial intelligence (AI) models. Dubbed Citations, the feature lets developers anchor the output of the Claude family of AI models in source documents, with the aim of improving the reliability and accuracy of AI-generated responses. The AI firm has already provided the feature to companies such as Thomson Reuters (for the CoCounsel platform) and Endex. Notably, the feature is available at no additional cost beyond standard token-based pricing.

Generative AI models are typically prone to errors and hallucination, and adding web searches to the equation only makes it trickier for large language models (LLMs) to avoid inaccurate information, as many rely on relatively basic retrieval-augmented generation (RAG) mechanisms. AI companies that build specialised tools often restrict the data an LLM can draw on to improve accuracy and reliability; examples include Gemini in Google Docs, the AI-powered Writing Assist tools on Samsung and Apple smartphones, and the PDF analysis tools in Adobe Acrobat. However, building such a layer into an API is harder, because developers use it to create a wide range of tools with different data requirements.

To solve this problem, Anthropic introduced the Citations feature for its API. Detailed in a newsroom post, the feature lets Claude ground its responses in source documents: Claude models can provide detailed references to the exact paragraphs and sentences from which they drew the information used to generate the output. The AI firm claims this will make AI-generated responses easily verifiable and more trustworthy.

With this, users can add source documents to the context window, and Claude will automatically cite the source in its output wherever it draws on the source material. As a result, developers no longer have to rely on complex prompts to ask Claude to include source information, a method the company acknowledged as inconsistent and cumbersome. Anthropic claims that with Citations, developers will be able to easily build AI solutions for document summarisation, tools that answer complex queries over long documents, and customer support systems. Notably, the company stated that Citations uses Anthropic's standard token-based pricing model and users will not pay for output tokens that return the quoted text; there may, however, be an extra charge for the additional input tokens used to process the source documents. Citations is currently available for the new Claude 3.5 Sonnet and Claude 3.5 Haiku models.
Anthropic launches Citations, a new API feature for its Claude AI models, designed to improve response accuracy by grounding outputs in source documents and reducing hallucinations.
Anthropic, a leading AI company, has introduced a new API feature called Citations, aimed at improving the accuracy and reliability of its Claude family of AI models. Announced on Thursday, this feature allows developers to ground AI-generated responses in source documents, potentially reducing the occurrence of AI hallucinations and confabulations [1].
The Citations feature processes user-provided source documents by chunking them into sentences. These chunks, along with user-provided context, are then passed to the model along with the user's query. This approach enables Claude to automatically cite specific passages it uses to generate answers [1].
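In practice, enabling this means attaching the document as a content block in a Messages API request and flagging it for citation. The sketch below builds such a request body as a plain dict; the field names follow the shape Anthropic has published for document blocks, but treat the exact schema (and the model alias used here) as assumptions to verify against the current API docs.

```python
# Illustrative request body for the Messages API with Citations enabled.
# Field names mirror Anthropic's published document-block shape; the
# exact schema and model name are assumptions, not a verified call.

source_text = (
    "The grass is green. The sky is blue. "
    "Claude chunks plaintext documents into sentences."
)

request_body = {
    "model": "claude-3-5-sonnet-latest",  # hypothetical model alias
    "max_tokens": 1024,
    "messages": [
        {
            "role": "user",
            "content": [
                {
                    "type": "document",
                    "source": {
                        "type": "text",
                        "media_type": "text/plain",
                        "data": source_text,
                    },
                    "title": "Example document",
                    "citations": {"enabled": True},  # turn Citations on
                },
                {"type": "text", "text": "What color is the grass?"},
            ],
        }
    ],
}

# This body would be POSTed to the Messages endpoint with an API key;
# no network call is made here.
```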
Developers can add source files to have models automatically cite claims inferred from those files. The feature is particularly useful in document summarization, Q&A, and customer support applications [2].
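On the response side, cited answers come back as text blocks that carry a list of citations pointing at the quoted source passages. The response shape below (a `citations` list with `cited_text` and `document_title` fields) mirrors what Anthropic describes but is mocked here for illustration; verify the real field names against the live API before depending on them.

```python
# Sketch of turning a cited response into footnoted text. The block
# shape is a mock of the described API response, not a live call.

mock_content = [
    {"type": "text", "text": "The grass is green", "citations": [
        {"cited_text": "The grass is green.",
         "document_title": "Example document"}
    ]},
    {"type": "text", "text": " according to the source."},
]

def render_with_footnotes(blocks: list[dict]) -> tuple[str, list[str]]:
    """Concatenate text blocks, numbering each citation as a footnote."""
    body, notes = [], []
    for block in blocks:
        body.append(block["text"])
        for cite in block.get("citations", []):
            notes.append(f'{cite["document_title"]}: "{cite["cited_text"]}"')
            body.append(f"[{len(notes)}]")
    return "".join(body), notes

text, footnotes = render_with_footnotes(mock_content)
# text → 'The grass is green[1] according to the source.'
```

This is the step that makes answers verifiable: each footnote can be checked against the exact passage the model relied on.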
Citations is currently available for Claude 3.5 Sonnet and Claude 3.5 Haiku models through both Anthropic's API and Google's Vertex AI platform [2]. The feature uses Anthropic's standard token-based pricing model, with no additional cost for output tokens returning quoted text. However, there might be extra charges for processing source documents [3].
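A back-of-envelope estimate shows where those input-token charges come from. The sketch below assumes roughly 1,000 tokens per page and the launch-era input rates of about $3 per million tokens for Claude 3.5 Sonnet and $0.80 for Claude 3.5 Haiku (the rates implied by TechCrunch's $0.30/$0.08 figures for a ~100-page document); actual token counts and prices vary, so check Anthropic's pricing page.

```python
# Back-of-envelope input-cost estimate reproducing the article's figures.
# Assumptions: ~1,000 tokens per page; input rates of $3/MTok (Sonnet)
# and $0.80/MTok (Haiku). Real token counts and prices will differ.

TOKENS_PER_PAGE = 1_000  # rough assumption
INPUT_PRICE_PER_MTOK = {
    "claude-3-5-sonnet": 3.00,
    "claude-3-5-haiku": 0.80,
}

def estimate_input_cost(pages: int, model: str) -> float:
    """Estimated dollars to process a source document of `pages` pages."""
    tokens = pages * TOKENS_PER_PAGE
    return tokens / 1_000_000 * INPUT_PRICE_PER_MTOK[model]

sonnet_cost = estimate_input_cost(100, "claude-3-5-sonnet")  # ≈ $0.30
haiku_cost = estimate_input_cost(100, "claude-3-5-haiku")    # ≈ $0.08
```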
Anthropic suggests several potential use cases for Citations, including:
- summarizing case files with key points linked back to the source
- answering questions across financial documents with traced references
- powering support systems that cite specific product documentation [1]
The company claims that in internal testing, the feature improved recall accuracy by up to 15 percent compared to custom citation implementations created by users within prompts [1].
AI researchers, such as Simon Willison, have shown interest in the Citations feature due to its fundamental integration of Retrieval Augmented Generation (RAG) techniques. Willison explains that while RAG patterns generally work well, there is still a risk of models answering based on other information from their training data or hallucinating incorrect details [1].
The introduction of Citations addresses a common challenge in generative AI models, which are prone to errors and hallucinations. By restricting data access and providing verifiable sources, Anthropic aims to make AI-generated responses more trustworthy and reliable [3].
© 2025 TheOutpost.AI All rights reserved