[1]
SurrealDB 3.0 wants to replace your five-database RAG stack with one
Building retrieval-augmented generation (RAG) systems for AI agents often involves stitching together multiple layers and technologies for structured data, vectors and graph information. In recent months it has also become increasingly clear that agentic AI systems need memory, sometimes referred to as contextual memory, to operate effectively. The complexity of keeping those separate data layers synchronized can lead to performance and accuracy issues. It's a challenge that SurrealDB is looking to solve.

SurrealDB on Tuesday launched version 3.0 of its namesake database alongside a $23 million Series A extension, bringing total funding to $44 million. The company has taken a different architectural approach from relational databases like PostgreSQL, native vector databases like Pinecone and graph databases like Neo4j. The OpenAI engineering team recently detailed how it scaled Postgres to 800 million users using read replicas -- an approach that works for read-heavy workloads. SurrealDB takes a different approach: store agent memory, business logic and multi-modal data directly inside the database. Instead of synchronizing across multiple systems, vector search, graph traversal and relational queries all run transactionally in a single Rust-native engine that maintains consistency.

"People are running DuckDB, Postgres, Snowflake, Neo4j, Qdrant or Pinecone all together, and then they're wondering why they can't get good accuracy in their agents," CEO and co-founder Tobie Morgan Hitchcock told VentureBeat. "It's because they're having to send five different queries to five different databases which only have the knowledge or the context that they deal with."

The architecture has resonated with developers: the database has 2.3 million downloads and 31,000 GitHub stars to date. Existing deployments span edge devices in cars and defense systems, product recommendation engines for major New York retailers, and Android ad-serving technologies, according to Hitchcock.

Agentic AI memory baked into the database

SurrealDB stores agent memory as graph relationships and semantic metadata directly in the database, not in application code or external caching layers. The Surrealism plugin system in SurrealDB 3.0 lets developers define how agents build and query this memory; the logic runs inside the database with transactional guarantees rather than in middleware.

Here's what that means in practice: when an agent interacts with data, it creates context graphs that link entities, decisions and domain knowledge as database records. These relationships are queryable through the same SurrealQL interface used for vector search and structured data. An agent asking about a customer issue can traverse graph connections to related past incidents, pull vector embeddings of similar cases, and join with structured customer data -- all in one transactional query.
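To make that concrete, here is a rough sketch of what such a combined lookup could look like from the official Rust SDK. The table names (customer, incident), the reported edge, the embedding field and the record ID are all hypothetical, and the SurrealQL follows generally documented patterns (graph traversal with ->, the <|k|> nearest-neighbour operator) rather than anything quoted in the article.

```rust
// Assumed Cargo dependencies: surrealdb, tokio (with macros + runtime), serde_json.
use surrealdb::engine::remote::ws::Ws;
use surrealdb::opt::auth::Root;
use surrealdb::Surreal;

#[tokio::main]
async fn main() -> surrealdb::Result<()> {
    // Connect to a local SurrealDB instance; endpoint and credentials are placeholders.
    let db = Surreal::new::<Ws>("127.0.0.1:8000").await?;
    db.signin(Root { username: "root", password: "root" }).await?;
    db.use_ns("support").use_db("agent").await?;

    // Embedding of the agent's current question, produced elsewhere by an embedding model.
    let question_embedding: Vec<f32> = vec![0.0; 768];

    // One SurrealQL statement: structured customer fields, a graph traversal to
    // past incidents, and a KNN vector lookup for similar cases (assumes a vector
    // index exists on incident.embedding), all resolved in the same query.
    let mut response = db
        .query(
            r#"
            SELECT
                name,
                plan,
                ->reported->incident.* AS past_incidents,
                (SELECT title, resolution FROM incident
                    WHERE embedding <|5|> $question_embedding) AS similar_cases
            FROM customer:jane;
            "#,
        )
        .bind(("question_embedding", question_embedding))
        .await?;

    let context: Option<serde_json::Value> = response.take(0)?;
    println!("{context:#?}");
    Ok(())
}
```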
"People don't want to store just the latest data anymore," Hitchcock said. "They want to store all that data. They want to analyze and have the AI understand and run through all the data of an organization over the last year or two, because that informs their model, their AI agent, about context, about history, and that can therefore deliver better results."

How SurrealDB's architecture differs from traditional RAG stacks

Traditional RAG systems query separate databases by data type. Developers write separate queries for vector similarity search, graph traversal and relational joins, then merge the results in application code. This creates synchronization delays as queries round-trip between systems.

In contrast, Hitchcock explained, SurrealDB stores data as binary-encoded documents with graph relationships embedded directly alongside them. A single query through SurrealQL can traverse graph relationships, perform vector similarity searches and join structured records without leaving the database.

That architecture also affects how consistency works at scale. Every node maintains transactional consistency, even at 50+ node scale, Hitchcock said. When an agent writes new context to node A, a query on node B immediately sees that update. No caching, no read replicas.

"A lot of our use cases, a lot of our deployments are where data is constantly updated and the relationships, the context, the semantic understanding, or the graph connections between that data needs to be constantly refreshed," he said. "So no caching. There's no read replicas. In SurrealDB, every single thing is transactional."

What this means for enterprise IT

"It's important to say SurrealDB is not the best database for every task. I'd love to say we are, but it's not. And you can't be," Hitchcock said. "If you only need analysis over petabytes of data and you're never really updating that data, then you're going to be best going with object storage or a columnar database. If you're just dealing with vector search, then you can go with a vector database like Qdrant or Pinecone, and that's going to suffice."

The inflection point comes when you need multiple data types together. The practical benefit shows up in development timelines: what used to take months to build with multi-database orchestration can now launch in days, Hitchcock said.
[2]
SurrealDB raises $23M to expand AI-native multimodel database - SiliconANGLE
SurrealDB Inc. today revealed that it has raised an additional $23 million in funding for its multimodel, artificial intelligence-native database. The plan is to accelerate product maturity and adoption, and to expand its team to scale up its cloud offering and deepen support for production deployments.

Founded in 2021, SurrealDB offers a multimodel database product that supports applications combining structured data, graph relationships and machine learning workloads within a single system. The company's platform addresses the challenge of maintaining consistent state, contextual relationships and persistent memory for AI agents as data volume and complexity increase. It works by consolidating models inside one engine so that application logic and contextual data can be stored and queried together, rather than relying on separate relational, document and vector databases connected through external services.

Under the hood, the database is written in Rust and supports relational, document, graph, time-series, vector, geospatial and key-value data models through a custom unified query language called SurrealQL. The query language allows structured records, embeddings and multimodal data such as images and audio to be stored and queried within the same environment. The platform also includes an embedded logic layer that lets developers define computed fields, record references and custom API endpoints directly within the database. SurrealDB is designed for cloud-native and distributed deployments, with support for real-time queries, graph traversal and embedded logic suitable for AI applications, transactional systems and data-driven services.
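As an illustration of what "multiple models in one engine" can look like in practice, the sketch below stores a document-style record, a graph edge, a time-stamped reading and an embedding side by side, then queries across them. Everything here is hypothetical: the table and field names are invented, the handle is assumed to be an already-connected client from the official surrealdb crate, and the statements use generally documented SurrealQL rather than anything specific to version 3.0.

```rust
// Assumed Cargo dependencies: surrealdb, serde_json (plus tokio in the caller).
use surrealdb::engine::remote::ws::Client;
use surrealdb::Surreal;

// Store several data models side by side, then query across them in one statement.
// Assumes `db` is an already-authenticated connection from the surrealdb crate.
async fn mixed_models(db: &Surreal<Client>) -> surrealdb::Result<()> {
    db.query(
        r#"
        -- Document-style record with nested structure.
        CREATE customer:jane SET
            name = "Jane Doe",
            address = { city: "New York", country: "US" };

        -- Product record carrying a vector embedding next to structured fields.
        CREATE product:laptop SET
            name = "Laptop",
            embedding = [0.12, 0.98, 0.33];

        -- Graph relationship between two records.
        RELATE customer:jane->purchased->product:laptop SET at = time::now();

        -- Time-series-style reading keyed by a timestamp and a record link.
        CREATE reading SET device = device:thermostat_1, value = 21.5, at = time::now();
        "#,
    )
    .await?;

    // Relational filter and graph traversal combined in a single statement.
    let mut res = db
        .query("SELECT name, ->purchased->product.name AS products FROM customer WHERE address.city = 'New York';")
        .await?;
    let rows: Vec<serde_json::Value> = res.take(0)?;
    println!("{rows:#?}");
    Ok(())
}
```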
The company claims its database has become the fastest-growing database of all time, with 2.3 million downloads, 31,000 GitHub stars and more than 1,000 forks. Notable SurrealDB customers include Verizon Communications Inc., Walmart Inc., ING Groep NV, Nvidia Corp., Samsung Electronics Co. Ltd., Tencent Holdings Ltd. and Poly AI Ltd.

The new funding, an extension of the company's previous Series A round, saw Chalfen Ventures and Begin Capital join existing investors FirstMark Capital and Georgian Partners to bring the full Series A round to $38 million.

"Every compute era requires a new database paradigm," said Chalfen Ventures founder Mike Chalfen. "We are in the AI era, but most ambitious enterprise AI projects stall. They need a data platform that makes unprecedentedly large-scale contextual information available to agentic systems in a way that is synchronized across data sources, fast and secure." SurrealDB, he said, is that platform. "It meets the needs of both AI agents and enterprise data governance," he said. "It is the best onramp for companies looking to get native AI initiatives off the ground and I believe that it can shape what it means for a business to be agent-ready."

The funding extension comes as SurrealDB 3.0 is released into general availability. The company pitches it as the most stable, performant and enterprise-ready release of the database to date. SurrealDB 3.0 introduces architectural updates aimed at improving reliability and operational consistency, including a redesigned on-disk document representation, separation of stored values from executable expressions, ID-based metadata storage and synchronized writes enabled by default.

The release expands support for vector indexing and search, multimodal data storage and agent memory through context graphs embedded directly within the database layer. It also adds computed fields, record references and a plugin framework known as Surrealism, which allows business logic and access controls to be implemented as transactional, version-controlled modules inside the database runtime.
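On the vector-indexing side specifically, the following sketch shows how a table and vector index might be declared using the DEFINE TABLE/FIELD/INDEX syntax documented for earlier SurrealDB releases (an MTREE vector index here). The incident table and its fields are invented, and neither article describes the exact surface of the 3.0 additions.

```rust
use surrealdb::engine::remote::ws::Client;
use surrealdb::Surreal;

// Declare a table whose records hold structured fields plus an embedding,
// and a vector index so KNN queries (the <|k|> operator) can use it.
// Assumes an already-connected handle from the surrealdb crate.
async fn define_incident_schema(db: &Surreal<Client>) -> surrealdb::Result<()> {
    db.query(
        r#"
        DEFINE TABLE incident SCHEMAFULL;
        DEFINE FIELD title      ON incident TYPE string;
        DEFINE FIELD status     ON incident TYPE string;
        DEFINE FIELD resolution ON incident TYPE option<string>;
        DEFINE FIELD embedding  ON incident TYPE array<float>;

        -- MTREE vector index over the embedding field, using cosine distance.
        DEFINE INDEX incident_embedding ON incident
            FIELDS embedding MTREE DIMENSION 768 DIST COSINE;
        "#,
    )
    .await?;
    Ok(())
}
```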
SurrealDB launched version 3.0 of its multimodel database alongside a $23 million Series A extension, bringing total funding to $44 million. The company aims to solve a critical challenge for AI agents: eliminating the complexity of running multiple databases for structured data, vectors, and graph relationships by consolidating everything into a single Rust-native engine with transactional consistency.
SurrealDB announced the launch of SurrealDB 3.0 alongside a $23 million Series A extension on Tuesday, bringing its total funding to $44 million [1][2]. The funding round saw Chalfen Ventures and Begin Capital join existing investors FirstMark Capital and Georgian Partners, with the full Series A now totaling $38 million [2]. The company plans to accelerate product maturity, expand its team to scale cloud offerings, and deepen support for production deployments [2].
Founded in 2021, SurrealDB has attracted significant developer interest, with 2.3 million downloads and 31,000 GitHub stars to date [1][2]. The company claims to be the fastest-growing database of all time, with more than 1,000 forks and notable customers including Verizon, Walmart, ING, Nvidia, Samsung, Tencent, and Poly AI [2].

Building retrieval-augmented generation (RAG) systems for AI agents typically requires multiple technologies for structured data, vectors, and graph information. This fragmentation creates performance and accuracy issues that SurrealDB aims to solve with its AI-native multimodel database approach [1]. "People are running DuckDB, Postgres, Snowflake, Neo4j, Qdrant or Pinecone all together, and then they're wondering why they can't get good accuracy in their agents," CEO and co-founder Tobie Morgan Hitchcock explained to VentureBeat. "It's because they're having to send five different queries to five different databases which only have the knowledge or the context that they deal with" [1].

The platform consolidates relational, document, graph, time-series, vector, geospatial, and key-value data models through a custom unified query language called SurrealQL [2]. Instead of synchronizing across multiple systems, vector search, graph traversal, and relational queries all run transactionally in a single Rust-native engine that maintains consistency [1].
SurrealDB stores agent memory as graph relationships and semantic metadata directly in the database, not in application code or external caching layers [1]. The Surrealism plugin framework in SurrealDB 3.0 lets developers define how AI agents build and query this memory, with the logic running inside the database with transactional guarantees rather than in middleware [1][2].

When an agent interacts with data, it creates context graphs that link entities, decisions, and domain knowledge as database records [1]. An agent asking about a customer issue can traverse graph connections to related past incidents, pull vector embeddings of similar cases, and join with structured customer data -- all in one transactional query [1].
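As a purely illustrative sketch of what persisting such a context graph could look like, the snippet below writes an agent's note and its graph links in a single multi-statement SurrealQL transaction. The agent_note, mentions and led_to names are invented, and the Rust wrapper assumes an already-connected client from the official SDK.

```rust
use surrealdb::engine::remote::ws::Client;
use surrealdb::Surreal;

// Persist one step of agent memory as a record plus graph edges, atomically.
async fn remember_interaction(db: &Surreal<Client>) -> surrealdb::Result<()> {
    db.query(
        r#"
        BEGIN TRANSACTION;

        -- A note the agent took during the conversation (hypothetical table).
        CREATE agent_note:step_42 SET
            text = "Customer reported login failures after the upgrade",
            at   = time::now();

        -- Graph edges linking the note to the entities and decisions it concerns.
        RELATE agent_note:step_42->mentions->customer:jane;
        RELATE agent_note:step_42->mentions->incident:login_failure;
        RELATE agent_note:step_42->led_to->decision:escalate_to_tier2;

        COMMIT TRANSACTION;
        "#,
    )
    .await?;
    Ok(())
}
```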
SurrealDB stores data as binary-encoded documents with graph relationships embedded directly alongside them [1]. A single query through SurrealQL can traverse graph relationships, perform vector search, and join structured records without leaving the database, eliminating the synchronization delays of traditional approaches [1].

Every node maintains transactional consistency, even at 50+ node scale, according to Hitchcock [1]. When an agent writes new context to node A, a query on node B immediately sees that update. "A lot of our use cases, a lot of our deployments are where data is constantly updated and the relationships, the context, the semantic understanding, or the graph connections between that data needs to be constantly refreshed," he said. "So no caching. There's no read replicas. In SurrealDB, every single thing is transactional" [1].
SurrealDB 3.0 introduces architectural updates aimed at improving reliability and operational consistency, including a redesigned on-disk document representation, separation of stored values from executable expressions, ID-based metadata storage, and synchronized writes enabled by default [2]. The release expands support for vector indexing and search, multimodal data storage including images and audio, and computed fields [2].

Existing deployments span edge devices in cars and defense systems, product recommendation engines for major New York retailers, and Android ad-serving technologies [1]. The platform includes an embedded logic layer that allows developers to define computed fields, record references, and custom API endpoints directly within the database [2].
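Neither article spells out the 3.0 syntax for computed fields or record references, but SurrealQL has long supported record links and field-level value expressions; the sketch below uses those existing constructs as a rough approximation, with an invented order table.

```rust
use surrealdb::engine::remote::ws::Client;
use surrealdb::Surreal;

// Field-level logic defined inside the database rather than in application code.
// Uses long-standing SurrealQL constructs as an approximation of 3.0's additions.
async fn define_order_logic(db: &Surreal<Client>) -> surrealdb::Result<()> {
    db.query(
        r#"
        -- A link from an order back to its customer record.
        DEFINE FIELD customer ON order TYPE record<customer>;

        -- A value recomputed whenever the record is read (a SurrealQL future).
        DEFINE FIELD total ON order VALUE <future> { math::sum(items.price) };
        "#,
    )
    .await?;
    Ok(())
}
```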
Hitchcock acknowledged limitations: "It's important to say SurrealDB is not the best database for every task. If you only need analysis over petabytes of data and you're never really updating that data, then you're going to be best going with object storage or a columnar database. If you're just dealing with vector search, then you can go with a vector database like Qdrant or Pinecone" [1]. However, for organizations building AI agents that require constant data updates and contextual understanding across multiple data types, the multimodel database approach offers a path to simplify RAG-stack complexity while maintaining transactional consistency at scale.