2 Sources
[1]
Qlik's most important AI feature is knowing when to say nothing. Boring is brilliant.
There's a phrase that's stuck with me since a conversation with Martin Tombs, VP Global Go-to-Market for Analytics and Field CTO EMEA at Qlik, back in February: "boring is brilliant." He used it to describe the unglamorous but essential work of data governance. I deployed it back at him in the context of observability log files and we laughed at the irony. But given the state of enterprise AI right now, it's also genuinely useful shorthand for where the industry needs to go - and what separates the vendors building for production from the ones still selling demos.

Tombs was speaking ahead of Qlik's announcement of the general availability of its agentic experience in Qlik Cloud, delivered through Qlik Answers as a unified conversational interface, alongside the GA of its Model Context Protocol (MCP) server. This week at Qlik Connect, a new ServiceNow partnership completes the picture. Taken together, they form a fuller architecture story - one worth unpacking carefully, because the market is drowning in agentic announcements that don't survive contact with production environments.

Ask Tombs where enterprise AI deployments actually fail and his answer is that it's not the model - and not the interface. He explains:

"Getting your unstructured data right - I think everyone's still getting their heads around this. It's not just where you store a PDF. It's what's in the PDF, who's responsible for that content in the PDF."

This tracks with what diginomica has been hearing across organizations ranging from Fortune 100 to Fortune 500. And if you're building agentic systems that need to reason across both structured and unstructured data, the governance challenge compounds. You cannot bolt it on afterward, as Tombs puts it, because the agent has to make decisions about what to trust, what to surface, and - critically - what to decline to answer.
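That last point - knowing when to decline to answer - can be sketched as a simple retrieval gate: if nothing in the governed corpus scores above a relevance threshold, the assistant refuses rather than generates. The sketch below is illustrative only; the scoring function, threshold, and corpus are invented assumptions, not Qlik Answers' actual implementation.

```python
# Illustrative "decline to answer" guardrail. The scorer, threshold, and corpus
# are hypothetical - this is not Qlik Answers' actual implementation.
import re
from dataclasses import dataclass

@dataclass
class Document:
    doc_id: str
    text: str

def relevance(query: str, doc: Document) -> float:
    """Toy score: fraction of query words that appear in the document."""
    query_words = re.findall(r"\w+", query.lower())
    doc_words = set(re.findall(r"\w+", doc.text.lower()))
    return sum(w in doc_words for w in query_words) / max(len(query_words), 1)

def answer(query: str, corpus: list[Document], threshold: float = 0.5) -> str:
    best = max(corpus, key=lambda d: relevance(query, d))
    if relevance(query, best) < threshold:
        # Outside the governed dataset: refuse rather than hallucinate.
        return "I don't know - that's outside my governed data."
    return f"Based on {best.doc_id}: {best.text}"

corpus = [Document("finance-q3", "Q3 revenue grew eight percent on subscription renewals.")]
print(answer("Q3 revenue growth", corpus))        # grounded answer citing finance-q3
print(answer("How do I peel a banana?", corpus))  # refusal
```

The point of the gate is that silence is a first-class outcome, not an error path - the same trade-off the banana example below makes vivid.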
One of the more honest design decisions Qlik has built into Answers is a hard boundary: ask it something outside its governed dataset and it will not hallucinate a response. It tells you it doesn't know. Tombs illustrates this with a deliberately absurd example - training a finance instance and then asking how to peel a banana - but the principle lands in production environments where a confidently wrong answer is categorically worse than silence. Tombs explains:

"If I give you three wrong answers, you're going to be out very quickly in asking me questions. And that's really how I see the adoption of any vendor's product."

Gartner's hype cycle framing, which Tombs refers to, describes vendors still climbing the peak while enterprise consumers have started the descent into disillusionment. The gap between what AI delivers in a controlled demo and what it reliably does in a messy production environment remains significant - and it follows that this is a useful lens for evaluating everything Qlik has announced.

Qlik's Model Context Protocol (MCP) server deserves a little more scrutiny. Originally developed by Anthropic, MCP provides a standardized way for AI assistants to discover and invoke external tools and data sources. Qlik's implementation exposes its analytics engine, tools, and governed data products to third-party AI assistants, including Claude.

Tombs uses a door analogy that earns its keep. If Qlik's internal Answers capability is the front door of the house, MCP is the side door - the one you open to let external agents access what you've built. But you need a bouncer on that door. He elaborates:

"By opening this front door, you've always got to have a bouncer on the door that says, 'What are you coming in for? What are you doing?' You've got to hand a menu to the MCP of what you are capable of doing, what your uniqueness is - and then other things can take advantage of that."

The governance layer has to come before the MCP exposure.
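The "menu" in that quote maps directly onto MCP's discovery step: tools advertise a name, a description, and an input schema, and an external agent reads that list before deciding what to invoke. Here is a minimal sketch of the pattern - the tool name, schema, and return value are invented for illustration and are not Qlik's actual MCP surface.

```python
# Sketch of MCP-style capability discovery: tools advertise a description and an
# input schema so an agent can choose among them before invoking anything.
# The registered tool here is hypothetical, not Qlik's real MCP server.
from typing import Any, Callable

class ToolRegistry:
    def __init__(self) -> None:
        self._tools: dict[str, dict[str, Any]] = {}

    def register(self, name: str, description: str, schema: dict[str, str],
                 fn: Callable[..., Any]) -> None:
        self._tools[name] = {"description": description, "schema": schema, "fn": fn}

    def list_tools(self) -> list[dict[str, Any]]:
        """Discovery: the 'menu' an external agent reads before invoking."""
        return [{"name": n, "description": t["description"], "schema": t["schema"]}
                for n, t in self._tools.items()]

    def invoke(self, name: str, **kwargs: Any) -> Any:
        return self._tools[name]["fn"](**kwargs)

registry = ToolRegistry()
registry.register(
    "query_sales_kpi",
    "Return a governed KPI value for a region.",
    {"region": "string"},
    lambda region: {"region": region, "revenue": 1_200_000},
)

# An agent first discovers capabilities, then decides whether to invoke.
menu = registry.list_tools()
chosen = next(t for t in menu if "KPI" in t["description"])
print(registry.invoke(chosen["name"], region="EMEA"))
```

The "bouncer" would sit in front of `invoke`, checking the caller's identity and entitlements before any tool runs - which is why the governance layer has to exist before the side door opens.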
The practical implication is that an organization that has done its data governance homework can make that trusted intelligence available to whatever AI assistant its teams use - without re-exposing raw data or bypassing established controls. The distinction from a conventional API is also worth making explicit for those new to MCP: it standardizes not just the call but the capability discovery, so an external agent understands what a tool does before deciding whether to invoke it. That matters for multi-agent orchestration, where agents select tools dynamically rather than following hard-coded instructions.

The capability Tombs is most visibly animated about is Discovery Agent - Qlik's continuously monitoring agent that surfaces anomalies, shifts, and emerging risks in key measures, without a human having to go hunting for them. He notes:

"We can proactively identify anomalies, trends, and risk. We could tell decision-makers that without me finding it all for them."

Qlik CEO Mike Capone is pointed about what enterprise boards are actually wrestling with right now: they are navigating geo-political volatility, tightening AI regulation, and relentless cost pressure - all of which changes what enterprise AI has to be: auditable, governable, and capable of acting inside real workflows. Discovery Agent is the operational result of that positioning.

The counterpoint is that automated anomaly detection is only as reliable as the model's understanding of what "normal" looks like for a given business - a data quality and contextual calibration problem that no GA announcement resolves. Tombs is candid about this: Qlik will get some things right and some things wrong in deployment, and the product is in constant iteration.

The Qlik Connect announcement adds a dimension that contextualizes the rest.
The new ServiceNow partnership routes Qlik analytics into ServiceNow workflows and agents, while adding Qlik metadata collectors to the ServiceNow Data Catalog for discovery and lineage visibility. An organization can have the best governed analytics layer available, but if the insight never reaches the person or agent making the operational decision, it's academic. ServiceNow is where a significant volume of enterprise work execution happens - and getting Qlik's analytics engine, which aggregates cross-system context from ERP, CRM, supply chain, billing, and support data, to feed into that environment is a substantive architectural move rather than a badge-swap partnership.

Pramod Mahadevan, VP of Data and Analytics Product Ecosystem at ServiceNow, says:

"The decisions people and agents make every day are only as good as the data behind them."

That's been true for thirty years. What's changed is the plumbing: the data layer, analytics layer, and workflow layer are now connectable in ways that no longer require custom integration work at every junction. The metadata collector piece reinforces this - lineage, discovery, and structure visibility for Qlik-managed assets become accessible from within ServiceNow's own governance tooling, which is a more practically grounded integration than most partnership announcements in this space manage to deliver.

The combination of a governed data product layer, a reasoning engine that knows when not to answer, an MCP interface that extends trusted intelligence to external assistants, continuous monitoring via Discovery Agent, and now a workflow integration with ServiceNow represents a genuine end-to-end architecture - not a feature announcement dressed up as a strategy. Those honest caveats remain significant, though.
Deployment quality varies, unstructured data governance is hard regardless of tooling, and the cost of running agentic systems at enterprise scale is underexamined - Tombs raises cost governance explicitly, and he's right to. These are not problems Qlik's announcements solve; they're problems that good tooling makes more tractable.

The underlying positioning - govern first, agent second, trust as infrastructure - is correct. The organizations that internalize it will build AI systems that actually work in production. The ones that skip it will keep discovering, at significant expense, why promises don't match up to reality. "Boring is brilliant" needs to be read as an engineering principle, not a marketing line.
[2]
Governed data is AI's real competitive edge, says Qlik - SiliconANGLE
It's not governance slowing down enterprise AI -- it's the lack of it, says Qlik executive

Enterprises are chasing AI models at a dizzying pace, yet the organizations pulling ahead are the ones that paused to build something less glamorous: a trusted, governed data foundation. Pressure is mounting across industries, as research from Qlik and Enterprise Technology Research shows that data quality, availability and governance remain the top blockers to scaling agentic AI deployments. But a data strategy anchored in governance is not a constraint on AI momentum -- it is the prerequisite for it, according to James Fisher, chief strategy officer at Qlik Technologies Inc.

"We've seen this big shift from trying to just apply AI to a problem to thinking about what is needed architecturally to bring that data together -- at the right latency, in the right format -- and deliver that in a way that it's consumable to AI applications," Fisher told theCUBE. "I think that's a really positive shift that we've seen over the last couple of years."

Fisher spoke with theCUBE's Rebecca Knight and Rob Strechay at Qlik Connect 2026, during an exclusive broadcast on theCUBE, SiliconANGLE Media's livestreaming studio. They discussed AI architecture decisions, governed data as an accelerator and Qlik's product roadmap for agentic analytics.

The urgency reflects a broader industry reckoning. Analysts warn that enterprise AI will stall not because models aren't ready, but because governance isn't keeping pace. Governance slowing AI delivery might still be a common chief information officer objection, but it's not necessarily the right one, according to Fisher. "I'm a big fan of the phrase 'go slower to go faster,'" he said.
"By creating and taking the time to build that foundation -- to think about where it's gonna be used, how it's gonna be applied -- just that little step, that little extra time you take there will provide exponential benefits long term, whether that's performance of the AI application [or] whether that's performance of the agent."

That compounding logic also applies to data products -- reusable, governed datasets built around specific consumer needs. Solving one use case with a well-structured data product tends to unlock the next, creating organizational momentum, Fisher noted. Qlik Answers is built on exactly that premise, pairing governed data products with a conversational AI interface so that decisions carry citations and explanations users can trust.

"While we're all worrying about data infrastructures and building agents and the cost of deployment, I think it's always important we understand about the user, about the individual that's working with it," Fisher said. "We need to not only democratize access to AI, but democratize the value that can come from it."

Stay tuned for the complete video interview, part of SiliconANGLE's and theCUBE's coverage of Qlik Connect 2026.
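The idea that "decisions carry citations" can be modeled as an answer object that cannot exist without the governed sources behind it. This is a hedged sketch - the class and field names are assumptions for illustration, not Qlik Answers' actual response format.

```python
# Sketch of an answer that always carries its citations. Field names are
# illustrative assumptions, not Qlik Answers' real response schema.
from dataclasses import dataclass

@dataclass(frozen=True)
class Citation:
    source_id: str
    excerpt: str

@dataclass(frozen=True)
class GovernedAnswer:
    text: str
    citations: tuple[Citation, ...]

    def __post_init__(self) -> None:
        # Enforce the invariant: no citations, no answer.
        if not self.citations:
            raise ValueError("An answer must cite at least one governed source.")

ans = GovernedAnswer(
    text="Q3 revenue grew 8%, driven by subscription renewals.",
    citations=(Citation("dp-finance-q3", "Revenue +8% QoQ"),),
)
print(f"{ans.text} [sources: {', '.join(c.source_id for c in ans.citations)}]")
```

Making the invariant a constructor-level check, rather than a convention, is the structural analogue of "govern first": an uncited answer is unrepresentable, not merely discouraged.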
Qlik is positioning data governance as the critical foundation for enterprise AI success rather than a barrier. At Qlik Connect 2026, executives revealed how Qlik Answers and governed data products enable organizations to deploy agentic AI that delivers trusted decisions. Research shows data quality and governance remain the top blockers to scaling AI deployments.
Qlik is making a contrarian argument in the enterprise AI race: the organizations winning are not the ones chasing the latest AI models, but those building a trusted, governed data foundation first [2]. At Qlik Connect 2026, the analytics platform announced general availability of its agentic experience in Qlik Cloud, delivered through Qlik Answers as a unified conversational AI interface, alongside a new ServiceNow partnership that completes its architecture story [1]. Research from Qlik and Enterprise Technology Research reveals that data quality, availability, and data governance remain the top blockers to scaling agentic AI deployments across industries [2]. Yet the conventional wisdom that governance slows AI delivery misses the point entirely, according to James Fisher, chief strategy officer at Qlik. "I'm a big fan of the phrase 'go slower to go faster,'" Fisher told theCUBE. "By creating and taking the time to build that foundation -- to think about where it's gonna be used, how it's gonna be applied -- just that little step, that little extra time you take there will provide exponential benefits long term" [2].
Source: SiliconANGLE
One of the most honest design decisions Qlik has built into Qlik Answers is a hard boundary: ask it something outside its governed dataset and it will not hallucinate a response. It simply tells you it doesn't know [1]. Martin Tombs, VP Global Go-to-Market for Analytics and Field CTO EMEA at Qlik, used a deliberately absurd example to illustrate the principle -- training a finance instance and then asking how to peel a banana -- but the logic lands hard in production environments where a confidently wrong answer is categorically worse than silence. "If I give you three wrong answers, you're going to be out very quickly in asking me questions. And that's really how I see the adoption of any vendor's product," Tombs explained [1].
Source: diginomica
The compounding logic of governed data also applies to data products -- reusable, governed datasets built around specific consumer needs. Solving one use case with a well-structured data product tends to unlock the next, creating organizational momentum, Fisher noted [2]. Qlik Answers is built on exactly that premise, pairing data products with a conversational interface so that trusted decisions carry citations and explanations users can verify.

Fisher emphasized a shift in enterprise thinking: "We've seen this big shift from trying to just apply AI to a problem to thinking about what is needed architecturally to bring that data together -- at the right latency, in the right format -- and deliver that in a way that it's consumable to AI applications" [2]. This architectural approach separates vendors building for production from those still selling demos, particularly as the gap between what AI models deliver in controlled environments versus messy production settings remains significant [1].
Qlik's Model Context Protocol (MCP) server, now generally available, exposes its analytics engine, tools, and governed data products to third-party AI assistants including Claude [1]. Originally developed by Anthropic, MCP provides a standardized way for AI applications to discover and invoke external tools and data sources. Tombs used a door analogy: if Qlik Answers is the front door, MCP is the side door that lets external agents access what you've built -- but only with proper governance acting as the bouncer [1].

The practical implication is that organizations that have done their data foundation homework can make trusted intelligence available to whatever AI assistant their teams use, without re-exposing raw data or bypassing established controls. The distinction from a conventional API is that MCP standardizes not just the call but the capability discovery, so an external agent understands what a tool does before deciding whether to invoke it -- critical for multi-agent orchestration [1].

The capability Tombs highlighted with particular enthusiasm is Discovery Agent -- Qlik's continuously monitoring agent that surfaces anomalies, shifts, and emerging risks in key measures without requiring manual analysis. "We can proactively identify anomalies, trends, and risk. We could tell decision-makers that without me finding it all for them," Tombs noted [1]. This proactive approach addresses a fundamental challenge in enterprise AI: moving from reactive querying to intelligent alerting that anticipates what decision-makers need to know.

Summarized by Navi