4 Sources
[1]
Startup Goodfire Notches $1.25 Billion Valuation to Decode AI Models
Goodfire has raised $150 million, with investors valuing it at $1.25 billion, and plans to work on retraining AI models for better performance and spend more money on computing power and new hires. A growing cadre of multibillion-dollar startups are racing to create the best artificial intelligence models, able to absorb tasks from software engineers, musicians and paralegals. Eric Ho, the chief executive officer of a startup called Goodfire, thinks we're moving too fast. "I think what we're doing right now is quite reckless," he said. "How can we trust and rely on something that we don't understand?" Because so much of how AI works is still mysterious (even to the people building it), Goodfire is focused on examining AI models to understand what they're actually doing, partly in order to improve the technology. The startup just raised $150 million for the effort, with investors valuing it at $1.25 billion. B Capital led the Series B deal, which also included Menlo Ventures and Lightspeed Venture Partners. Goodfire is the latest company to raise millions for AI research, joining a class of startups sometimes called neolabs. Some of these have commanded stratospheric valuations. OpenAI alums Mira Murati and Ilya Sutskever, for example, have secured billions of dollars for their startups, Thinking Machines Labs and Safe Superintelligence, respectively. A new lab called Humans& recently raised $480 million. And AI researcher Richard Socher is in funding talks with investors for his own lab at a $4 billion valuation. Founded in 2024, Goodfire is focused on interpretability, or the science of making sense of AI models to understand what they're doing, debug them, discover new insights in their code, and improve their performance. That's difficult because the weights that underlie an AI models' performance are written in a language that's indecipherable by humans. 
Goodfire has raised $150 million, with investors valuing it at $1.25 billion, and plans to work on retraining AI models for better performance and spend more money on computing power and new hires.

A growing cadre of multibillion-dollar startups is racing to create the best artificial intelligence models, able to absorb tasks from software engineers, musicians and paralegals. Eric Ho, the chief executive officer of a startup called Goodfire, thinks we're moving too fast. "I think what we're doing right now is quite reckless," he said. "How can we trust and rely on something that we don't understand?" Because so much of how AI works is still mysterious (even to the people building it), Goodfire is focused on examining AI models to understand what they're actually doing, partly in order to improve the technology. The startup just raised $150 million for the effort, with investors valuing it at $1.25 billion. B Capital led the Series B deal, which also included Menlo Ventures and Lightspeed Venture Partners.

Goodfire is the latest company to raise millions for AI research, joining a class of startups sometimes called neolabs. Some of these have commanded stratospheric valuations. OpenAI alums Mira Murati and Ilya Sutskever, for example, have secured billions of dollars for their startups, Thinking Machines Labs and Safe Superintelligence, respectively. A new lab called Humans& recently raised $480 million. And AI researcher Richard Socher is in funding talks with investors for his own lab at a $4 billion valuation.

Founded in 2024, Goodfire is focused on interpretability, or the science of making sense of AI models to understand what they're doing, debug them, discover new insights in their code, and improve their performance. That's difficult because the weights that underlie an AI model's performance are written in a language that's indecipherable by humans. When developers run into a problem with their models, they have to retrain the models to produce an entirely new set of weights, since they can't understand the existing weights' code to debug it, Ho said. To break down that language barrier, Goodfire creates interpreter models that effectively map the mind of the AI model. Then, he says, the interpreter models can do "brain surgery" to improve the model or derive novel insights from it.

Frontier biology AI lab Prima Mente, for example, built an AI model that showed promise at predicting Alzheimer's disease, but its team didn't fully understand how the predictions were made. Goodfire's interpreter technology was able to discover a novel class of biomarkers for Alzheimer's disease inside the Prima Mente model, based on connections the model was making that humans previously had not.

Goodfire works with customers including Microsoft Corp., the Mayo Clinic and the nonprofit Arc Institute. Some are working with the lab to build their own AI tools using existing foundation models, while others are model makers aiming to improve their products. Goodfire has raised $209 million to date. Ho said the lab has made progress in creating interpreter models to read and debug existing AI code. With the fresh funding, Goodfire plans to work on retraining AI models for better performance. The startup also wants to spend more money on computing power and new hires.
"I don't like the trajectory of AI right now -- we're about to deploy all these systems that we don't understand everywhere," Ho said. "I think that's not good, and I want to change that."
[2]
Goodfire raises $150M in funding to enhance its AI interpretability platform - SiliconANGLE
Goodfire Inc., a startup working to uncover how artificial intelligence models make decisions, has raised $150 million in funding. B Capital led the Series B round. Goodfire stated in its funding announcement today that the deal also drew contributions from Salesforce Inc., former Google chief executive Eric Schmidt and more than half a dozen others. The company is now worth $1.25 billion.

A large language model consists of small computational units called artificial neurons. Those units often have a simple design, but they interact in complex ways: upwards of tens of thousands of neurons are involved in generating a prompt response. LLMs' complexity makes it difficult to determine how they make decisions.

San Francisco-based Goodfire is working to ease the task. The company has built a platform that it calls a model design environment to map out LLMs' internal components. According to Goodfire, understanding how a model processes data makes it easier to identify and fix flaws in its design.

The platform's first component focuses on the LLM training phase. Goodfire says that researchers often have limited visibility into how a neural network learns new skills from its training dataset. The company's platform maps out the training workflow and identifies flaws, which enables researchers to boost LLM output quality. The second component of Goodfire's platform monitors models' performance once development is complete and they're running in production. The company says that it reduced AI hallucinations by half in one recent project.

One of Goodfire's first customers is healthcare AI startup Prima Mente Inc. The latter company has developed an AI model that analyzes cell-free DNA (cfDNA) fragments to detect Alzheimer's disease. According to Goodfire, its researchers analyzed the algorithm and discovered that it mainly considers the length of cfDNA fragments when diagnosing patients.
Existing scientific literature didn't contain data on the diagnostic significance of cfDNA fragment length.

Last year, Goodfire developed a method called SPD to understand how LLMs process data. It works by identifying model components that may be involved in generating a prompt response and removing them one by one. If removing a component doesn't affect an LLM's output, researchers can conclude that it's not involved in the processing workflow. "Interpretability, for us, is the toolset for a new domain of science: a way to form hypotheses, run experiments, and ultimately design intelligence rather than stumbling into it," said Goodfire CEO Eric Ho.
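The remove-and-compare idea behind SPD can be illustrated with a toy sketch. This is hypothetical illustrative code, not Goodfire's implementation: it treats a "model" as a pipeline of named components, removes each component in turn, and flags the ones whose removal leaves the output unchanged.

```python
# Toy ablation sketch (illustrative only, not Goodfire's SPD code).
# A "model" here is just an ordered list of (name, function) components.

def run_pipeline(components, x):
    """Apply each (name, fn) component in order to the input."""
    for _, fn in components:
        x = fn(x)
    return x

def ablate(components, x):
    """Return names of components whose removal leaves the output unchanged."""
    baseline = run_pipeline(components, x)
    uninvolved = []
    for i in range(len(components)):
        # Remove component i and rerun the pipeline.
        reduced = components[:i] + components[i + 1:]
        if run_pipeline(reduced, x) == baseline:
            uninvolved.append(components[i][0])
    return uninvolved

# Toy example: "double" and "add_one" matter; "no_op" does not.
components = [
    ("double", lambda v: v * 2),
    ("no_op", lambda v: v + 0),
    ("add_one", lambda v: v + 1),
]
print(ablate(components, 3))  # prints ['no_op']
```

In a real LLM, "components" would be learned subnetworks rather than simple functions, and "unchanged output" would be measured statistically over many prompts rather than by exact equality, but the elimination logic is the same.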
[3]
Goodfire Raises $150 Million to Better Understand AI | PYMNTS.com
The company's Series B funding round, announced Thursday (Feb. 5), values Goodfire at $1.25 billion and comes as the startup continues its efforts in the field of interpretability. As Goodfire describes it, interpretability is the science of reverse engineering neural networks and turning those insights "into a universal, model-agnostic platform."

"We believe that interpretability is the core toolkit for digital biology," Goodfire wrote in its Thursday blog post. "As large foundation models become central to digital science, interpretability methods are our microscope for understanding what the models have learned from the vast data they've seen."

In an interview with Bloomberg News, Goodfire CEO Eric Ho suggested that the technology field is moving too fast in developing AI models. "I think what we're doing right now is quite reckless," he said. "How can we trust and rely on something that we don't understand?"

Goodfire said its goal is to create a future in which it can understand the fundamentals of what AI models are doing and use that understanding to develop models in a "more principled, aligned, and useful" fashion. "To that end, we've built a 'model design environment': a platform that uses interpretability-based primitives to get insights from models and data, improve model behavior, and monitor them in production," the blog post said. "We use this environment internally for research, and deploy it forward with our customers, collaborating in a shared environment."
As covered here last year when Goodfire announced its $50 million Series A round, the company's Ember platform decodes the neurons inside an AI model, offers direct access to its "internal thoughts," and lets users precisely shape the behaviors and boost the performance of their AI models.

In other AI news, PYMNTS wrote earlier this week about the technology's gradual integration into finance departments. Early AI applications, that report said, were centered on pattern recognition like predicting demand, identifying late payments and bolstering forecast accuracy. But eventually, these systems gained confidence, fueled by larger datasets and reinforced by measurable wins. "Folks are just starting to understand that AI isn't just automation with kind of sexier marketing," Finexio CEO and founder Ernest Rolfson told PYMNTS in December. Research in a PYMNTS Intelligence report, "How Agentic AI Went From Zero to CFO Test Runs in 90 Days," shows that nearly 7% of enterprise finance chiefs in the U.S. have already begun using agentic AI in live finance workflows, while another 5% are running pilots.
[4]
Goodfire raises $150 million to improve AI model understanding By Investing.com
Investing.com -- Goodfire, a startup focused on examining AI models to understand their functions, has raised $150 million in Series B funding, reaching a valuation of $1.25 billion. B Capital led the investment round, with participation from Menlo Ventures and Lightspeed Venture Partners. The funding brings Goodfire's total raised capital to $209 million since its founding in 2024.

Goodfire specializes in interpretability, which involves making sense of AI models to understand their operations, debug them, discover new insights in their code, and enhance their performance. The company works with clients including Microsoft Corp., the Mayo Clinic, and the nonprofit Arc Institute. Some clients use Goodfire to build their own AI tools with existing foundation models, while others are model creators seeking to improve their products.

With the new funding, Goodfire plans to focus on retraining AI models for better performance, invest in computing power, and hire more staff.

This article was generated with the support of AI and reviewed by an editor.
Goodfire has raised $150 million in Series B funding at a $1.25 billion valuation to advance AI interpretability research. The startup creates interpreter models to decode neural networks and understand how AI models make decisions. CEO Eric Ho warns the industry is moving too fast without understanding the technology it's deploying.
Goodfire has closed a $150 million Series B funding round led by B Capital, with participation from Menlo Ventures and Lightspeed Venture Partners, along with contributions from Salesforce and former Google CEO Eric Schmidt [2]. The deal values the San Francisco-based startup at $1.25 billion, bringing its total raised capital to $209 million since its founding in 2024 [1][4]. The fresh capital will fund the startup's mission to decode AI models and make their decision-making processes transparent, addressing what CEO Eric Ho calls a "reckless" approach to AI development.
Source: Bloomberg
Goodfire specializes in AI interpretability, the science of reverse engineering neural networks to understand what AI models are actually doing when they process information [3]. The challenge stems from the fundamental architecture of large language models (LLMs), which consist of artificial neurons that interact in extraordinarily complex ways: tens of thousands of neurons can be involved in generating a single prompt response [2]. The weights underlying an AI model's performance are written in a language that's indecipherable to humans, making it nearly impossible for developers to debug problems without completely retraining models [1].

To break through this opacity, Goodfire has built what it calls a model design environment: a platform that uses interpretability methods to map out the internal components of foundation models [2][3]. The company creates interpreter models that effectively map the mind of an AI model, then perform what Ho describes as "brain surgery" to improve performance or derive novel insights [1]. The platform operates across two critical phases: during training, it maps out the learning workflow and identifies flaws to boost output quality; in production, it monitors model performance to catch issues such as AI hallucinations, which Goodfire claims to have reduced by half in one recent project [2].
Source: PYMNTS
The practical applications of Goodfire's technology extend into critical healthcare domains. Prima Mente, a healthcare AI startup, developed an AI model to detect Alzheimer's disease by analyzing cfDNA fragments, but the team didn't fully understand how the model made its predictions [1]. Using Goodfire's interpreter technology, researchers discovered a novel class of Alzheimer's biomarkers based on the length of cfDNA fragments, connections the model was making that humans had not previously identified in the scientific literature [2]. This breakthrough demonstrates how debugging AI can unlock scientific insights hidden within neural networks.
Goodfire works with major clients including Microsoft, the Mayo Clinic, and the nonprofit Arc Institute [4]. Some organizations use the platform to build their own AI tools on existing foundation models, while model makers leverage it to enhance AI performance. Last year, the company developed a method called SPD that identifies which model components are involved in generating prompt responses by systematically removing components and observing the effects on output [2]. Ho frames this work as essential: "Interpretability, for us, is the toolset for a new domain of science: a way to form hypotheses, run experiments, and ultimately design intelligence rather than stumbling into it" [2].

With the new funding, Goodfire plans to advance from debugging existing models to retraining AI models for better performance, while also investing heavily in computing power and expanding its team [1][4]. The startup joins a growing class of AI research companies, sometimes called neolabs, that have commanded significant valuations. OpenAI alums Mira Murati and Ilya Sutskever have secured billions for their ventures, while AI researcher Richard Socher is in funding talks at a $4 billion valuation [1]. Ho's concerns about the current trajectory remain stark: "I don't like the trajectory of AI right now -- we're about to deploy all these systems that we don't understand everywhere. I think that's not good, and I want to change that" [1].
Summarized by Navi