3 Sources
[1]
What happens when AI starts building itself? | TechCrunch
Richard Socher has been a major figure in AI for some time, best known for founding the AI search engine You.com and, before that, his work on ImageNet. Now, he's joining the current generation of research-focused AI startups with Recursive Superintelligence, a San Francisco-based startup that came out of stealth on Wednesday with $650 million in funding. Socher is joined in the new venture by a cohort of prominent AI researchers, including Peter Norvig and Cresta co-founder Tim Shi. Together, they're working to create a recursively self-improving AI model, one that can autonomously identify its own weaknesses and redesign itself to fix them, without human involvement -- a long-held holy grail of contemporary AI research. I spoke with him on Zoom after the launch, digging into Recursive's unique technical approach and why he doesn't think of this new project as a neolab, the informal term for a new generation of AI startups that prioritize research over building products. This interview has been edited for length and clarity.

We hear a lot about recursion these days! It feels like a very common goal across different labs. What do you see as your unique approach?

Our unique approach is to use open-endedness to get to recursive self-improvement, which no one has yet achieved. It's an elusive goal for a lot of people. A lot of people already assume it happens when you just do auto-research. You can take AI and ask it to make some other thing better, which could be a machine learning system, or just a letter that you write, or whatever it might be, right? But that's not recursive self-improvement. That's just improvement. Our main focus is to build truly recursive, self-improving superintelligence at scale, which means that the entire process of ideation, implementation and validation of research ideas would be automatic.
First [it would automate] AI research ideas, eventually any kind of research ideas, even eventually in the physical domains. But it's particularly powerful when it's AI working on itself, and it's developing a new kind of self-awareness of its own shortcomings.

You used the term open-ended -- does that have a specific technical meaning?

It does. In fact, Tim Rocktäschel, one of our co-founders, led the open-endedness and self-improvement teams at Google DeepMind and particularly worked on the world model Genie 3, which is a great example of open-endedness. You can tell it any concept, any world, any agent, and it just creates it, and it's interactive. In biological evolution, animals adapt to the environment, and then others counter-adapt to those adaptations. It's a process that can evolve for billions of years, and interesting stuff keeps happening, right? That's how we developed eyes in our [heads].

The same dynamic applies to red teaming in an LLM context. Basically, you try to get the LLM to tell you how to build a bomb, and you want to make sure that it doesn't do it. Now, humans can sit there for a long time and come up with interesting examples of what the AI shouldn't say. But what if you tested this first AI with a second AI, and that second AI now has the task of making the first AI [try to] say all the possible bad things? Then they can go back and forth for millions of iterations. You can actually allow two AIs to co-evolve. One keeps attacking the other, and comes up with not just one angle but many different angles -- hence the rainbow analogy. Then you can inoculate the first AI, and it becomes safer and safer. This was an idea from Tim Rocktäschel, and it's now used in all the major labs.

How do you know when it's done?

I suppose it's never done. Some of these things will never be done. You can always get more intelligent. You can always get better at programming and math and so on.
There are some bounds on intelligence; I'm actually trying to formalize those right now, but they're astronomical. We're very far away from those limits.

As a neolab, it feels like you're supposed to be doing something that the major labs aren't doing. So part of the implication here is that you don't think the major labs are going to reach RSI [recursive self-improvement] by doing what they're doing. Is that fair to say?

I can't really comment on what they're doing, but I do think we're approaching it differently. We really embrace the concept of open-endedness, and our team is entirely focused on that vision. The team has been researching this and publishing papers in this space for the last decade, and it has a track record of really pushing the field forward significantly and shipping real products. Tim Shi built Cresta into a unicorn. Josh Tobin was one of the first people at OpenAI and eventually led their Codex and deep research teams. I actually sometimes struggle a little bit with this neolab category. I feel like we're not just a lab. I want us to become a really viable company, to have amazing products that people love to use, that have a positive impact on humanity.

So when do you plan to ship your first product?

I've thought about that a lot. The team has made so much progress, we may actually pull up the timelines from what we had initially assumed. But yes, there will be products, and you'll have to wait quarters, not years.

One of the ideas around recursive self-improvement is that, once we have this sort of system, compute becomes the only important resource. The faster you run the system, the faster it will improve, and there's no outside human activity that will really make a difference. So the race just becomes, how much processing power can we throw at this? Do you think that's the world we're headed toward?

Compute is not to be underestimated.
I think in the future, a really important question will be: how much compute does humanity want to spend to solve which problems? Here's this cancer and here's that virus -- which one do you want to solve first? How much compute do you want to give it? It becomes a matter of resource allocation eventually. It's going to be one of the biggest questions in the world.
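The attacker/defender co-evolution Socher describes can be caricatured in a few lines of code. Everything below is invented for illustration -- the category and style names, the grid of attack "angles," and the all-or-nothing "inoculation" rule; real red teaming of this kind pits two language models against each other, not string tuples. It is a toy simulation of the dynamic, not the method itself:

```python
import random

# Hypothetical grid of attack "angles": risk category x prompting style.
CATEGORIES = ["weapons", "malware", "fraud"]
STYLES = ["direct", "roleplay", "obfuscated"]

def defender_blocks(defenses, category, style):
    """The defender refuses if it has been inoculated against this angle."""
    return (category, style) in defenses

def attacker_propose(rng):
    """The attacker samples an angle to try against the defender."""
    return rng.choice(CATEGORIES), rng.choice(STYLES)

def co_evolve(rounds=1000, seed=0):
    """Run the back-and-forth: successful attacks trigger inoculation."""
    rng = random.Random(seed)
    defenses = set()   # angles the defender already refuses
    successes = 0      # attacks that got through before being patched
    for _ in range(rounds):
        cell = attacker_propose(rng)
        if not defender_blocks(defenses, *cell):
            successes += 1
            defenses.add(cell)  # inoculate: this angle never works again
    return successes, len(defenses)

successes, covered = co_evolve()
print(successes, covered)
```

The grid is what makes the "many different angles" point: once an angle is patched, the attacker only gains by finding a new combination, so the defender's coverage grows monotonically over the iterations.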
[2]
Recursive Superintelligence raises $650m at $4.65bn valuation to build self-improving AI
Recursive Superintelligence, a startup founded by former leaders from Meta AI, Google DeepMind, OpenAI, and Salesforce AI, has emerged from stealth with $650 million in funding at a $4.65 billion valuation. Led by Richard Socher and co-founded by ex-Meta FAIR director Yuandong Tian, the company is pursuing recursive self-improvement: AI systems that autonomously improve themselves in an accelerating loop. GV, Greycroft, Nvidia, and AMD backed the round. The startup has fewer than 30 employees and no released product.

The idea that an AI system could improve itself, then use those improvements to improve itself again, faster, in an accelerating loop that eventually outpaces every human researcher on earth, has been a fixture of computer science folklore since at least the 1960s. For most of that time, it remained comfortably theoretical. Now someone has raised $650 million to build it. Recursive Superintelligence, a startup founded by former leaders from Meta AI, Google DeepMind, OpenAI, Salesforce AI, and Uber AI, emerged from stealth on 13 May with a $4.65 billion valuation and a thesis that would have sounded like science fiction two years ago but now sits squarely within the Overton window of Silicon Valley ambition. The company's stated mission: build AI systems that can autonomously discover knowledge, continuously optimise themselves, and evolve in an open-ended loop, much like biological evolution, but without the inconvenience of waiting millions of years.

The team behind the loop

The round was led by GV, Alphabet's venture capital arm, and Greycroft, with participation from Nvidia and AMD, the two chipmakers whose hardware underpins virtually all frontier AI training. The involvement of both companies is notable: strategic investment from the firms that sell the picks and shovels suggests they see recursive self-improvement not as a theoretical curiosity but as a near-term compute customer. The founding team is built to signal credibility.
Richard Socher, the former chief scientist at Salesforce and founder of the AI search engine You.com, leads the company alongside seven co-founders: Yuandong Tian, formerly a research scientist director at Meta's Fundamental AI Research lab (FAIR), where he led work on reinforcement learning, LLM reasoning, and AI-guided optimisation; Tim Rocktäschel, a professor of AI at University College London and former principal scientist at Google DeepMind; Alexey Dosovitskiy, one of the authors of the Vision Transformer (ViT), the 2020 paper that reshaped computer vision research; Josh Tobin, formerly of OpenAI; Caiming Xiong; Tim Shi; and Jeff Clune. Peter Norvig, co-author of Artificial Intelligence: A Modern Approach, the standard university textbook in the field, serves as an adviser.

Yuandong Tian's involvement is particularly striking. A graduate of Shanghai Jiao Tong University who went on to earn a PhD in robotics from Carnegie Mellon, Tian spent over a decade at Meta FAIR, where his work spanned some of the most consequential problems in modern AI research. He led the DarkForest Go project, a CNN-based Go AI developed before DeepMind's AlphaGo captured global attention, and later became lead scientist on ELF OpenGo. His departure from Meta and immediate entry into a startup pursuing the most ambitious goal in the field is itself a signal: the talent that built the current generation of AI systems is now betting that the next generation can build itself.

What recursive self-improvement actually means

The concept is deceptively simple. Instead of human researchers designing each new generation of AI, an AI system would automate parts of its own research and development process, generating improvements that in turn make it better at generating improvements. A company that achieves this first would, in theory, be able to extend its lead over competitors exponentially, because its development velocity would be compounding rather than linear.
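"Compounding rather than linear" is just the difference between additive and multiplicative growth. A toy model with made-up numbers -- a lab that adds a fixed 0.5 units of capability per research cycle versus one whose gain is 10% of its current capability -- shows the crossover:

```python
# Invented numbers for illustration only: neither gain schedule comes
# from the company or the article.

def linear_progress(cycles, start=1.0, gain=0.5):
    """Fixed gain per cycle: human researchers add a constant amount."""
    cap = start
    for _ in range(cycles):
        cap += gain
    return cap

def compounding_progress(cycles, start=1.0, rate=0.1):
    """Gain proportional to current capability: the loop feeds itself."""
    cap = start
    for _ in range(cycles):
        cap *= 1.0 + rate
    return cap

for cycles in (10, 50, 100):
    print(cycles, linear_progress(cycles), round(compounding_progress(cycles), 1))
```

Early on the linear lab is ahead (6.0 versus about 2.6 after 10 cycles here), but the multiplicative loop eventually dominates by orders of magnitude, which is the whole argument for why being first to close the loop would matter.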
Recursive Superintelligence has outlined a staged roadmap. The first step, according to company materials, is to train a system with the capabilities of "50,000 doctors" to automate AI scientific research itself. From there, the company plans to run what it calls a "Level 1" autonomous training system, with a public launch targeted for mid-2026. The funding will be used in part to secure the large-scale compute infrastructure required to run these experiments. The company currently operates from offices in San Francisco and London, with a team that has expanded beyond 25 researchers and engineers. The round was described as heavily oversubscribed.

The race is already on

Recursive Superintelligence is not pursuing this thesis in isolation. The largest AI laboratories are already using their own models to accelerate research. Anthropic has said that the majority of its code is now written by Claude. OpenAI has reported that GPT-5.5 developed a parallelisation method that boosted token generation speeds by more than 20%. Google DeepMind has built AlphaEvolve, a coding agent designed for scientific and algorithmic discovery. Google co-founder Sergey Brin has reportedly described coding gains as a path to "AI takeoff" internally. What distinguishes Recursive Superintelligence from these efforts is that none of the major laboratories has organised an entire company around recursive self-improvement as its core commercial thesis. OpenAI, Anthropic, and Google DeepMind all use AI to assist their research workflows, but their businesses are built around selling models and API access. Recursive is betting that the self-improvement loop itself is the product. Whether that bet pays off depends on a question that remains genuinely open: whether recursive self-improvement produces the kind of runaway acceleration its proponents describe, or whether it converges on diminishing returns as each cycle of improvement yields smaller gains.
Anthropic co-founder Jack Clark has estimated a roughly 60% probability that a system capable of training a more powerful successor on its own, without human involvement, will exist by the end of 2028, and a 30% chance by 2027. For now, what is certain is the price the market has placed on the possibility. Recursive Superintelligence is four months old, has fewer than 30 employees, and has not released a product. It is valued at $4.65 billion. In the current AI investment climate, the promise of a machine that can improve itself is apparently worth more than many companies that have already built one.
[3]
Recursive Superintelligence raises $650M to build self-improving AI models - SiliconANGLE
Recursive Superintelligence Inc., a startup that hopes to develop self-improving artificial intelligence models, launched today with $650 million in funding. Alphabet Inc.'s GV fund and Greycroft led the round. They were joined by Nvidia Corp. and Advanced Micro Devices Inc.'s venture capital arm. Recursive says that the investment values it at $4.65 billion. The company was founded earlier this year by former Salesforce Inc. chief scientist Richard Socher. He earlier launched You.com Inc., a provider of application programming interfaces that AI models use to perform online research. The startup received a $1.5 billion valuation last year. According to the New York Times, Recursive's initial team comprised Socher and six other staffers. The company now has more than 25 employees in San Francisco and London. They're working to build so-called recursive self-improving superintelligence, or an AI model that can discover new knowledge similarly to human scientists. Current neural networks can't perform basic research in a fully autonomous manner. As a result, Recursive's first priority is to build an AI model that can improve its own code base. The company hopes that such a model would be capable of discovering how to develop an AI that is as effective as humans at scientific tasks. The company's AI will search for ways to improve itself by carrying out simulations "in an open-ended process of automated scientific discovery." Recursive says that the model will develop experiment ideas, test them and then validate the results. The company will develop guardrails to prevent the software from producing risky output. According to Recursive, the experiments carried out by its AI model will focus on improving not only its code but also its harness. A harness is a set of auxiliary programs that AI providers use to enhance the output of their algorithms.
Furthermore, Recursive's system will search for ways to improve its training and inference infrastructure. OpenAI Group PBC is already using its recently released GPT-5.5 model to that end. The company splits each inference request into so-called chunks and spreads the chunks across multiple graphics card cores to speed up processing. Until recently, the number of chunks involved in the workflow was fixed. According to OpenAI, GPT-5.5 developed a more efficient parallelization method that boosted token generation speeds by more than 20%. Some companies are using AI to enhance not only their inference workflows but also the underlying hardware. Recursive investor Alphabet, for example, designs its TPU accelerators with the help of a neural network trained on chip blueprints. The creators of the system recently launched a startup called Ricursive Intelligence Inc. to make similar technology available for other companies. Recursive didn't disclose what machine learning methods will power its self-improving AI. Rival Ineffable Intelligence Ltd., which also hopes to develop models that can discover new knowledge, is using reinforcement learning, a machine learning technique commonly used in large language model projects. "We will start with AI research itself but eventually hope to expand its aperture to physics, chemistry and especially pre-clinical biology," Socher wrote in a post on X. "AI will be to biology what calculus was to physics - a new language and way of thinking that deals with complex systems and helps us understand and engineer them better."
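The fixed-versus-adaptive chunking idea described above can be sketched with a back-of-the-envelope latency model. All numbers and the cost model are invented -- real inference schedulers must account for memory bandwidth, kernel launch costs, and batching -- but the sketch shows why letting the chunk count adapt to the number of cores helps:

```python
import math

# Toy model: a request of `tokens` units of work is split into `chunks`,
# chunks are spread round-robin over `cores`, and latency is governed by
# the busiest core, with a fixed per-chunk scheduling overhead.

def latency(tokens, chunks, cores, overhead=1.0):
    chunk_size = math.ceil(tokens / chunks)
    per_core = math.ceil(chunks / cores)       # chunks the busiest core runs
    return per_core * (chunk_size + overhead)  # that core's serial work

def fixed_chunking(tokens, cores):
    """The old scheme: the chunk count never adapts to the hardware."""
    return latency(tokens, chunks=8, cores=cores)

def adaptive_chunking(tokens, cores):
    """Pick the best chunk count among a few candidates."""
    return min(latency(tokens, c, cores) for c in (1, 2, 4, 8, 16, 32, 64))

print(fixed_chunking(1024, cores=32), adaptive_chunking(1024, cores=32))
```

In this toy, eight fixed chunks leave most of the 32 cores idle, while the adaptive scheme picks 32 chunks and cuts the modeled latency by roughly 4x; too many chunks would start losing again to per-chunk overhead, which is why there is a schedule to optimize at all.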
Recursive Superintelligence emerged from stealth with $650 million in funding at a $4.65 billion valuation. Founded by Richard Socher and AI leaders from Meta, Google DeepMind, and OpenAI, the startup aims to create self-improving AI that autonomously identifies its own weaknesses and redesigns itself without human involvement—a long-held goal in AI research.

Recursive Superintelligence emerged from stealth on May 13 with $650 million in funding at a $4.65 billion valuation, pursuing what many consider the holy grail of artificial intelligence research. The San Francisco-based startup, led by former Salesforce chief scientist Richard Socher, aims to create a recursively self-improving AI model that can autonomously identify its own weaknesses and redesign itself without human involvement [1]. The funding round was led by GV, Alphabet's venture capital arm, and Greycroft, with strategic participation from chipmakers Nvidia and AMD [2].

The founding team brings together prominent AI researchers from across the industry. Richard Socher, best known for founding You.com and his work on ImageNet, leads the venture alongside seven co-founders, including Yuandong Tian, formerly a research scientist director at Meta's Fundamental AI Research lab (FAIR), and Tim Rocktäschel, who led open-endedness and self-improvement teams at Google DeepMind [1]. Other co-founders include Alexey Dosovitskiy, one of the authors of the Vision Transformer paper, Josh Tobin from OpenAI, Cresta co-founder Tim Shi, and Jeff Clune. Peter Norvig, co-author of the standard university textbook "Artificial Intelligence: A Modern Approach," serves as an adviser [2].

What distinguishes Recursive Superintelligence from other labs working on similar problems is its approach to recursive self-improvement through open-endedness. Socher explains that the focus is "to build truly recursive, self-improving superintelligence at scale, which means that the entire process of ideation, implementation and validation of research ideas would be automatic" [1]. This goes beyond simply using AI to improve other systems: it is about AI working on itself and developing what Socher describes as "a new kind of sense of self awareness of its own shortcomings" [1].

The concept of open-endedness draws inspiration from biological evolution, where animals adapt to environments and others counter-adapt in a process that can evolve for billions of years. In the AI context, this means systems that can co-evolve through millions of iterations. Rocktäschel, who worked on the world model Genie 3 at Google DeepMind, brings specific expertise in this area [1]. The approach involves AI co-evolution in which two systems interact, one attempting to identify vulnerabilities while the other strengthens itself, creating an open-ended loop of improvement.

The company has outlined a staged development path. The first step involves training a system with capabilities equivalent to "50,000 doctors" to automate AI scientific research itself [2]. From there, Recursive Superintelligence plans to run a "Level 1" autonomous training system, with a public launch targeted for mid-2026. The AI will search for ways to improve itself by carrying out simulations "in an open-ended process of automated scientific discovery," developing experiment ideas, testing them, and validating results [3].

Current neural networks cannot perform basic research in a fully autonomous manner, which is why the company's initial priority centers on building an AI model that can improve its own code base [3]. The experiments will focus on improving not only code but also the harness (auxiliary programs that AI providers use to enhance algorithm output) as well as training and inference infrastructure. The company is developing guardrails to prevent risky output as these systems gain more autonomy.

Recursive Superintelligence is not alone in this pursuit. Major AI laboratories are already using their own models to accelerate research. Anthropic reports that the majority of its code is now written by Claude, while OpenAI has stated that GPT-5.5 developed a parallelization method that boosted token generation speeds by more than 20% [2][3]. A company that achieves recursive self-improvement first would theoretically extend its lead exponentially, because its development velocity would compound rather than remain linear.

The startup currently operates from offices in San Francisco and London with a team that has grown beyond 25 researchers and engineers [2][3]. The round was heavily oversubscribed, and the strategic investment from Nvidia and AMD, the two chipmakers whose hardware underpins virtually all frontier AI training, suggests they view recursive self-improvement not as theoretical but as a near-term compute customer.

Socher envisions expanding beyond AI research itself: "We will start with AI research itself but eventually hope to expand its aperture to physics, chemistry and especially pre-clinical biology. AI will be to biology what calculus was to physics - a new language and way of thinking that deals with complex systems and helps us understand and engineer them better" [3]. The concept that an AI system could improve itself in an accelerating loop that eventually outpaces human researchers has been a fixture of computer science since the 1960s. Now, with significant funding and a team built from the architects of current AI systems, that theoretical possibility moves closer to reality.