There is no doubt that most enterprises are experimenting with AI on their data. Since the rise of generative AI, organizations have begun pointing large language models at their data warehouses in the hope of getting instant insights. However, as many organizations are discovering, the result is messy. Without proper semantic models, governance frameworks, and trust layers, these experiments often fail to deliver enterprise-grade results.
ThoughtSpot's latest product launch aims to bridge this gap, claiming that enterprises need a comprehensive analytics agent platform, not just point solutions or platform-enabled analytics, to support their broader AI strategies. ThoughtSpot has a long history of using natural language processing to support access to data insights. In fact, the rise of LLMs could have put the vendor in a tricky spot, given how similar general LLMs feel to what ThoughtSpot has long offered. However, it's becoming clear that the vendor isn't positioning itself against enterprise ambitions with LLMs, but is rather aiming to provide a trustworthy solution that supports those ambitions.
With that in mind, the analytics vendor today announced a suite of four specialized agents - SpotterViz, SpotterModel, SpotterCode, and Spotter 3 - designed to cover the entire analytics workflow from data modeling through to business decision-making.
Its hope is that enterprises will consolidate around platforms that can provide trusted, governed analytics across their tech stack - a horizontal approach - rather than adopting fragmented AI capabilities embedded within individual business applications.
ThoughtSpot's announcement centers on the release of specialized agents for different personas. SpotterModel targets data engineers, automating the creation and maintenance of semantic models. SpotterViz focuses on data analysts, automating dashboard creation and layout. SpotterCode helps developers embed analytics into applications. And Spotter 3, which diginomica covered in September, serves as the core intelligence engine that can work with both structured and unstructured data.
In an interview ahead of the launch, Francois Lopitaux, ThoughtSpot's SVP of Product Management, explained:
Now we are going to have a full platform with agents that are going to augment users so they can spend more time on basically thinking through the problem versus doing the work.
The agents work sequentially, with the output of one feeding into the next. SpotterModel creates a semantic model, which SpotterViz then uses to build Liveboards, which SpotterCode can embed into applications. This sequential flow aims to address a practical concern about agent sprawl - the risk that multiple AI agents working in isolation could replicate the data silo problems that enterprises have spent years trying to solve. Replacing multiple dashboards that don't work together with multiple agents that don't work together shouldn't be the aim!
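To make that hand-off concrete, here is a minimal Python sketch of the sequential flow. The function names and data shapes are hypothetical illustrations of the pipeline's structure - they are not ThoughtSpot's actual API.

```python
# Hypothetical sketch of the sequential agent hand-off described above.
# All function names and data shapes are illustrative, not ThoughtSpot's API.

def build_semantic_model(tables):
    """SpotterModel's role: turn raw warehouse tables into a governed model."""
    return {"tables": tables, "joins": [], "measures": ["revenue"]}

def build_liveboard(semantic_model):
    """SpotterViz's role: lay out visualizations on top of the model."""
    return {"charts": [f"chart_of_{m}" for m in semantic_model["measures"]]}

def embed_liveboard(liveboard):
    """SpotterCode's role: generate an embed snippet for an application."""
    return f"<iframe data-charts='{','.join(liveboard['charts'])}'></iframe>"

# Each agent's output is the next agent's input - a pipeline,
# not agents communicating with each other autonomously.
snippet = embed_liveboard(build_liveboard(build_semantic_model(["orders"])))
```

The point of the shape is that coordination lives in the platform's shared data layer, not in agent-to-agent messaging - which matches what Lopitaux says about the current state below.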
However, Lopitaux acknowledged that whilst the agents share a common data layer through ThoughtSpot's platform, they don't yet communicate with each other autonomously:
We don't yet have agents speaking to each other. I think that's a natural next step, if you think about it - but we don't have it yet. But the outcome of their work is really inside our platform and used by the other one, after the first one lets it go.
However, to me, the most interesting aspect of ThoughtSpot's launch this week isn't the number of agents - it's the architectural approach, where the vendor argues that enterprises need a proper semantic layer and analytics platform, rather than simply applying LLMs directly to their data (or relying on other platforms for 'good enough' analytics).
Describing what he is seeing in the market, Lopitaux said:
If you apply an LLM on your data, it's not going to work. And we see a lot of customers trying and failing...and then coming back to us.
The reason, he argues, comes down to business understanding and complexity. Large language models do really well at natural language interpretation but struggle with the realities of enterprise data - understanding how tables should join, managing row-level security, handling complex analytical queries, and ensuring accuracy. In enterprise environments with hundreds or thousands of tables, asking an LLM to figure out relationships and business logic on the fly doesn't work reliably.
And that's why ThoughtSpot's semantic layer is interesting. By defining business logic, relationships, and governance rules upfront in a semantic model, the vendor claims it can provide trusted, accurate answers without the hallucination risks that come with LLMs generating SQL directly.
Lopitaux explained:
With our technology, we don't use LLMs to generate the queries. We use LLMs to help us generate the search tokens, and then the search tokens are converted to SQL, which means you have no hallucinations in the results you are getting, which I think is fundamental.
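The claim is easier to see with a toy example. In the sketch below (my own illustration, not ThoughtSpot's implementation), the LLM's only job is to map a question to tokens drawn from a governed vocabulary; the SQL is then compiled deterministically from the semantic model, so the model can never invent a table, column, or join.

```python
# Toy illustration of a token-based query pipeline. This is NOT ThoughtSpot's
# implementation; it only shows why deterministic token-to-SQL compilation
# rules out hallucinated SQL.

SEMANTIC_MODEL = {
    "measures":   {"revenue": "SUM(orders.amount)"},
    "attributes": {"region": "customers.region"},
    "join":       "orders JOIN customers ON orders.customer_id = customers.id",
}

def tokens_to_sql(tokens):
    """Compile validated search tokens into SQL - no free-form generation."""
    # An unknown token raises KeyError instead of producing invented SQL.
    measure = SEMANTIC_MODEL["measures"][tokens["measure"]]
    attribute = SEMANTIC_MODEL["attributes"][tokens["group_by"]]
    return (f"SELECT {attribute}, {measure} "
            f"FROM {SEMANTIC_MODEL['join']} GROUP BY {attribute}")

# An LLM would pick the tokens; the compilation step never consults the LLM.
sql = tokens_to_sql({"measure": "revenue", "group_by": "region"})
```

Because every identifier in the output comes from the semantic model rather than from generated text, the trust and governance rules defined in that model apply to every query by construction.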
ThoughtSpot's argument is that embedded AI analytics capabilities in business platforms (and almost all enterprise platforms out there are now providing this) may work for simple queries within a single system, but fall short when enterprises need analytics that span multiple data sources, require complex joins, or demand auditability and governance. The semantic layer, ThoughtSpot hopes, will become the foundation that makes enterprise-grade AI analytics possible.
Unsurprisingly, ThoughtSpot is pointing to time saved as the ROI driver. Lopitaux cited specific examples: creating a semantic layer in two minutes versus two hours previously, building a Liveboard in two minutes versus four to eight hours, and getting complex research answers in two minutes versus two days.
These time savings should compound across the organization. Rather than data teams spending time on repetitive modeling and dashboard creation work, they can focus on architecture and strategy. Rather than business users waiting days for analytics teams to respond to queries, they can get answers immediately within their workflow.
The "in the flow of work" positioning is also important. Lopitaux emphasized that the agents aren't new standalone tools requiring change management:
It's really like the flow of the current work. Today I'm going to create a semantic model. Now I have a button, I click on it, and I have an assistant. I have a companion that can do the work for me.
As we have seen time and time again, this is a common adoption challenge that every analytics vendor faces: even sophisticated tools fail if users won't engage with them regularly. ThoughtSpot's approach - embedding agents into existing workflows rather than requiring users to switch contexts - aims to help ease that change management challenge.
I asked Lopitaux why ThoughtSpot is calling these capabilities "agentic" rather than simply AI-powered automation or copilots. His response focused on reasoning and action:
Yeah, I mean, it's agentic because it's really like thinking, reasoning, taking actions. It's really more than just, you know, summarizing text or other AI things that you can see here and there. It's really doing stuff. Actually, it's doing stuff for you.
The distinction matters in a market where "agentic AI" is at serious risk of becoming another overused buzzword. Not every 'agent' is built the same. ThoughtSpot's claim is that these agents aren't just suggesting actions or providing recommendations - they're actually creating semantic models, building dashboards, generating code, and organizing visualizations autonomously based on user intent.
Whether this crosses the threshold of what enterprise buyers should consider "truly agentic" is debatable. The agents still require human direction and validation. But they're clearly more autonomous than simple autocomplete or suggestion features.
ThoughtSpot's four-agent announcement appears to address real enterprise challenges, but the agents themselves aren't necessarily the most important part of this story. The more important argument, in my mind, is about the need for a proper analytics foundation - semantic models, governance, trust layers - as enterprises deploy AI across their operations.
The vendor's positioning that "you can't just slap an LLM on your data" rings true because it reflects what many organizations are seeing through painful experience. Simple use cases might work with direct LLM-to-database approaches, but enterprise analytics requires the complexity handling that semantic layers provide.
That said, questions do remain and further progress is required. The sequential workflow where each agent's output feeds the next is sensible, but it's not yet a fully coordinated agent system where agents communicate autonomously. ThoughtSpot acknowledges this is "a natural next step," but buyers should understand the current state versus the future vision.
ThoughtSpot argues that its comprehensive approach delivers better results than point solutions, but enterprises often prefer reducing vendor count and consolidating around platforms they already use. The vendor needs to prove that the analytics foundation it provides is compelling enough to justify a separate platform investment. It will need to sell the ROI and proof points comprehensively.
However, the focus on trust, explainability, and avoiding hallucinations through its search token architecture, rather than LLM-generated SQL, is smart. Buyers want to be able to take solutions to their leadership that don't expose them to avoidable errors and embarrassing outcomes that go unnoticed.
Customer stories will be key to the vendor's arguments here, and we look forward to hearing them as these agents roll out over the coming months.