3 Sources
[1]
Startup Goodfire Notches $1.25 Billion Valuation to Decode AI Models
Goodfire has raised $150 million, with investors valuing it at $1.25 billion, and plans to work on retraining AI models for better performance and spend more money on computing power and new hires. A growing cadre of multibillion-dollar startups is racing to create the best artificial intelligence models, able to absorb tasks from software engineers, musicians and paralegals. Eric Ho, the chief executive officer of a startup called Goodfire, thinks we're moving too fast. "I think what we're doing right now is quite reckless," he said. "How can we trust and rely on something that we don't understand?" Because so much of how AI works is still mysterious (even to the people building it), Goodfire is focused on examining AI models to understand what they're actually doing, partly in order to improve the technology. The startup just raised $150 million for the effort, with investors valuing it at $1.25 billion. B Capital led the Series B deal, which also included Menlo Ventures and Lightspeed Venture Partners. Goodfire is the latest company to raise millions for AI research, joining a class of startups sometimes called neolabs. Some of these have commanded stratospheric valuations. OpenAI alums Mira Murati and Ilya Sutskever, for example, have secured billions of dollars for their startups, Thinking Machines Labs and Safe Superintelligence, respectively. A new lab called Humans& recently raised $480 million. And AI researcher Richard Socher is in funding talks with investors for his own lab at a $4 billion valuation. Founded in 2024, Goodfire is focused on interpretability, or the science of making sense of AI models to understand what they're doing, debug them, discover new insights in their code, and improve their performance. That's difficult because the weights that underlie an AI model's performance are written in a language that's indecipherable by humans.
When developers run into a problem with their models, they have to retrain the models to produce an entirely new set of weights, since they can't understand the existing weights' code to debug it, Ho said. To break down that language barrier, Goodfire creates interpreter models that effectively map the mind of the AI model. Then, he says, the interpreter models can do "brain surgery" to improve the model or derive novel insights from it. Frontier biology AI lab Prima Mente, for example, built an AI model that showed promise at predicting Alzheimer's disease, but its team didn't fully understand how the predictions were made. Goodfire's interpreter technology was able to discover a novel class of biomarkers for Alzheimer's disease inside the Prima Mente model, based on connections the model was making that humans previously had not. Goodfire works with customers including Microsoft Corp., the Mayo Clinic and the nonprofit Arc Institute. Some are working with the lab to build their own AI tools using existing foundation models, while others are model makers aiming to improve their products. Goodfire has raised $209 million to date. Ho said the lab has made progress in creating interpreter models to read and debug existing AI code. With the fresh funding, Goodfire plans to work on retraining AI models for better performance. The startup also wants to spend more money on computing power and new hires.
"I don't like the trajectory of AI right now -- we're about to deploy all these systems that we don't understand everywhere," Ho said. "I think that's not good, and I want to change that."
[2]
Goodfire Raises $150 Million to Better Understand AI | PYMNTS.com
The company's Series B funding round, announced Thursday (Feb. 5), values Goodfire at $1.25 billion, and comes as the startup continues its efforts in the field of interpretability. As Goodfire describes it, interpretability is the science of reverse engineering neural networks and turning those insights "into a universal, model-agnostic platform." "We believe that interpretability is the core toolkit for digital biology," Goodfire wrote in its Thursday blog post. "As large foundation models become central to digital science, interpretability methods are our microscope for understanding what the models have learned from the vast data they've seen." In an interview with Bloomberg News, Goodfire CEO Eric Ho suggested that the technology field is moving too fast in developing AI models. "I think what we're doing right now is quite reckless," he said. "How can we trust and rely on something that we don't understand?" Goodfire said its goal is to create a future in which it can understand the fundamentals of what AI models are doing and use that understanding to develop models in a "more principled, aligned, and useful" fashion. "To that end, we've built a 'model design environment': a platform that uses interpretability-based primitives to get insights from models and data, improve model behavior, and monitor them in production," the blog post said. "We use this environment internally for research, and deploy it forward with our customers, collaborating in a shared environment."
As covered here last year when Goodfire announced its $50 million Series A round, the company's Ember platform decodes the neurons inside an AI model, offers direct access to its "internal thoughts," and lets users precisely shape the behaviors and boost the performance of their AI models. In other AI news, PYMNTS wrote earlier this week about the technology's gradual integration into finance departments. Early AI applications, that report said, were centered on pattern recognition like predicting demand, identifying late payments and bolstering forecast accuracy. But eventually, these systems gained confidence, fueled by larger datasets and reinforced by measurable wins. "Folks are just starting to understand that AI isn't just automation with kind of sexier marketing," Finexio CEO and founder Ernest Rolfson told PYMNTS in December. Research in a PYMNTS Intelligence report, "How Agentic AI Went From Zero to CFO Test Runs in 90 Days," shows that nearly 7% of enterprise finance chiefs in the U.S. have already begun using agentic AI in live finance workflows, while another 5% are running pilots.
[3]
Goodfire raises $150 million to improve AI model understanding By Investing.com
Investing.com -- Goodfire, a startup focused on examining AI models to understand their functions, has raised $150 million in Series B funding, reaching a valuation of $1.25 billion. B Capital led the investment round, with participation from Menlo Ventures and Lightspeed Venture Partners. The funding brings Goodfire's total raised capital to $209 million since its founding in 2024. Goodfire specializes in interpretability, which involves making sense of AI models to understand their operations, debug them, discover new insights in their code, and enhance their performance. The company works with clients including Microsoft Corp., the Mayo Clinic, and the nonprofit Arc Institute. Some clients use Goodfire to build their own AI tools with existing foundation models, while others are model creators seeking to improve their products. With the new funding, Goodfire plans to focus on retraining AI models for better performance, invest in computing power, and hire more staff.
Goodfire raised $150 million in Series B funding, reaching a $1.25 billion valuation, to advance AI interpretability research. The startup focuses on understanding how AI models work by creating interpreter models that map and debug neural networks. CEO Eric Ho warns the industry is moving too fast without understanding the systems being deployed.
Goodfire has secured $150 million in Series B funding at a $1.25 billion valuation, joining a growing class of AI research startups focused on understanding rather than just building artificial intelligence systems[1]. B Capital led the investment round, with participation from Menlo Ventures and Lightspeed Venture Partners, bringing the company's total raised capital to $209 million since its founding in 2024[3]. The AI funding comes as CEO Eric Ho voices concerns about the pace of AI deployment, stating, "I think what we're doing right now is quite reckless. How can we trust and rely on something that we don't understand?"[1]
Source: Bloomberg
Goodfire specializes in AI interpretability, which the company describes as "the science of reverse engineering neural networks" to understand what AI models are actually doing[2]. The challenge stems from the fact that the weights underlying an AI model's performance are written in a language indecipherable by humans. When developers encounter problems, they typically must retrain models entirely to produce new weights, since they cannot understand the existing code to debug it[1]. Eric Ho and his team have built interpreter models that effectively map the mind of AI models, enabling what he calls "brain surgery" to improve AI performance or derive novel insights[1].
Source: PYMNTS
The startup's interpretability capabilities have already demonstrated practical value across multiple sectors. Goodfire works with customers including Microsoft Corp., the Mayo Clinic, and the nonprofit Arc Institute[1]. In one notable case, frontier biology AI lab Prima Mente built an AI model showing promise at predicting Alzheimer's disease, but the team didn't fully understand how its predictions were made. Goodfire's interpreter technology discovered a novel class of biomarkers for Alzheimer's disease inside the Prima Mente model, based on connections the model was making that humans previously had not identified[1]. The company's Ember platform decodes the neurons inside an AI model, offers direct access to its "internal thoughts," and lets users precisely shape behaviors to improve AI performance[2].
With the fresh Series B funding, Goodfire plans to expand beyond debugging AI into retraining models for better performance. The company will invest heavily in computing power and new hires to support this evolution[1]. Ho said the lab has made progress in creating interpreter models to read and debug existing AI code, and now aims to use that foundation to build a "model design environment" that uses interpretability-based primitives to get insights from foundation models and data, improve model behavior, and monitor them in production[2]. The company envisions a future where it can understand the fundamentals of what AI models are doing and use that understanding to develop models in a "more principled, aligned, and useful" fashion[2].

Goodfire joins a class of startups sometimes called neolabs that have commanded stratospheric valuations in recent AI funding rounds. OpenAI alums Mira Murati and Ilya Sutskever have secured billions of dollars for their startups, Thinking Machines Labs and Safe Superintelligence, respectively. A new lab called Humans& recently raised $480 million, and AI researcher Richard Socher is in funding talks with investors for his own lab at a $4 billion valuation[1]. As large foundation models become central to digital science, Goodfire believes interpretability methods serve as "our microscope for understanding what the models have learned from the vast data they've seen"[2]. Ho's warning that "we're about to deploy all these systems that we don't understand everywhere" underscores the urgency of developing robust AI debugging capabilities before neural networks become even more deeply embedded in critical infrastructure[1].