Explainable AI to drive LLM observability investments to 50% of GenAI deployments by 2028


Gartner predicts that explainable AI (XAI) will drive LLM observability investments to 50% of GenAI deployments by 2028, up from 15% today. As enterprises scale generative AI, the trust requirement grows faster than the technology itself, making XAI and observability critical for ensuring accuracy, fairness, and transparency in AI-generated outputs.

Explainable AI Emerges as Critical Foundation for Secure Generative AI Deployment

Gartner, Inc. has issued a significant forecast that reveals how establishing trust in AI will reshape enterprise technology investments over the next four years. By 2028, the growing importance of Explainable AI (XAI) will drive LLM observability investments to 50% of Generative AI (GenAI) deployments, a substantial increase from just 15% today [2]. This shift reflects a fundamental recognition that as enterprises scale GenAI initiatives, the trust requirement grows faster than the technology itself.

Source: CXOToday


Understanding XAI and LLM Observability Solutions

Gartner defines Explainable AI (XAI) as a set of capabilities that describes a model, highlights its strengths and weaknesses, predicts its likely behavior and identifies any potential biases [1]. XAI can clarify an AI model's functioning to a specific audience to enable accuracy, fairness, accountability, stability and transparency in algorithmic decision-making. Meanwhile, LLM observability solutions monitor, analyze and provide actionable insights into the behavior and performance of large language models, going beyond standard IT operations measurements such as response times to examine specific metrics including hallucinations, bias and token utilization [2].
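To make the distinction concrete, the metrics described above (latency, token utilization, content flags) can be sketched as a thin instrumentation wrapper around an LLM call. This is a minimal illustration, not any vendor's API: the names `LLMObservation` and `observe_call` are hypothetical, the stubbed model stands in for a real endpoint, and the whitespace token count and substring-based "hallucination" flag are crude proxies for real tokenizers and grounding checks.

```python
import time
from dataclasses import dataclass, field

@dataclass
class LLMObservation:
    """One monitored LLM call: the kinds of metrics observability
    tools track beyond plain response time."""
    prompt: str
    response: str
    latency_s: float
    prompt_tokens: int
    completion_tokens: int
    flags: list = field(default_factory=list)

def observe_call(model_fn, prompt, banned_claims=()):
    """Wrap an LLM call and record latency, token utilization, and
    simple content flags (a stand-in for hallucination detectors)."""
    start = time.perf_counter()
    response = model_fn(prompt)
    latency = time.perf_counter() - start
    # Whitespace splitting is a crude proxy for real tokenizer counts.
    obs = LLMObservation(
        prompt=prompt,
        response=response,
        latency_s=latency,
        prompt_tokens=len(prompt.split()),
        completion_tokens=len(response.split()),
    )
    # Naive "hallucination" check: flag responses containing claims
    # known false for this domain; production tools use grounding checks.
    for claim in banned_claims:
        if claim.lower() in response.lower():
            obs.flags.append(f"possible_hallucination: {claim}")
    return obs

# Demo with a stubbed model (no real LLM involved).
fake_model = lambda p: "The Eiffel Tower is in Berlin."
obs = observe_call(fake_model, "Where is the Eiffel Tower?",
                   banned_claims=["Eiffel Tower is in Berlin"])
print(obs.completion_tokens, obs.flags)
```

In a real deployment these observations would be exported to a monitoring backend rather than printed, and the flagging step would be replaced by dedicated hallucination and bias evaluators.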

Source: DT


The Trust Gap Limiting GenAI Business Value

"As enterprises scale GenAI, the trust requirement grows faster than the technology itself," said Pankaj Prasad, Sr Principal Analyst at Gartner [2]. XAI provides visibility into why a model responded a certain way, while monitoring and analysis of large language models validates how that response was generated and whether it can be relied on. Without robust XAI and observability foundations, GenAI initiatives will be restricted to low-risk, internal, or noncritical tasks where output verification is easily managed or inconsequential, severely limiting the potential return on investment.

Market Growth Demands New Quality Measures

Gartner forecasts the global GenAI models market will exceed $25 billion in 2026 and reach $75 billion by 2029, driven by rapid adoption across industries [2]. As usage increases, so does the need for mechanisms that verify AI-generated content and protect against hallucinations, factual inaccuracies and biased reasoning. Traditional observability focused on speed and cost, but the priority is now moving toward deeper quality measures such as factual accuracy, logical correctness and sycophancy detection, requiring new governance-focused evaluation metrics and methods, such as human-in-the-loop validation of generated content's narrative and citation accuracy.

Building Multi-Layered Strategy for Ensuring Accuracy and Fairness

These tools are used by teams that develop and operationalize AI systems, and increasingly by IT operations teams and site reliability engineers (SREs) responsible for the performance and resilience of these systems in production [1]. To maximize reliability and business value, organizations must adopt a rigorous strategy centered on transparency and performance. This begins with mandating verifiable XAI tracing for high-impact use cases to document reasoning and data sources, alongside deploying multidimensional observability platforms that track everything from latency and drift to cost and output quality [2]. Teams should integrate automated evaluation metrics, such as safety checks and accuracy benchmarks, directly into CI/CD pipelines to ensure continuous validation. Fostering cross-functional alignment by educating legal and compliance stakeholders on explainability requirements will help organizations navigate governance hurdles and ensure secure, high-performing AI deployment.
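The CI/CD integration described above can be illustrated as a simple quality gate that fails the pipeline when benchmark accuracy drops below a threshold. This is a hedged sketch under stated assumptions: `run_eval_gate`, the stub model, and the substring-match scoring are all hypothetical simplifications, not any specific evaluation framework.

```python
def run_eval_gate(model_fn, eval_cases, min_accuracy=0.9):
    """CI-style quality gate: run benchmark cases against the model and
    report whether accuracy meets the threshold.
    `eval_cases` is a list of (prompt, expected_substring) pairs."""
    passed = sum(
        1 for prompt, expected in eval_cases
        if expected.lower() in model_fn(prompt).lower()
    )
    accuracy = passed / len(eval_cases)
    return accuracy, accuracy >= min_accuracy

# Stub model standing in for the deployed LLM endpoint.
stub_model = lambda p: {"capital of France?": "Paris",
                        "2 + 2?": "4"}.get(p, "unknown")

cases = [("capital of France?", "Paris"), ("2 + 2?", "4")]
accuracy, ok = run_eval_gate(stub_model, cases, min_accuracy=0.9)
print(f"accuracy={accuracy:.2f} gate_passed={ok}")
# In a CI job, a failed gate would exit nonzero, e.g.:
# sys.exit(0 if ok else 1)
```

Real pipelines would replace substring matching with model-graded or task-specific scoring (safety checks, citation verification), but the gating pattern, measure then block the release on regression, is the same.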
