Curated by THEOUTPOST
On Sat, 8 Feb, 12:03 AM UTC
6 Sources
[1]
Safe Superintelligence Commands $20B Valuation Without a Single Product
Less than seven months after its founding, Safe Superintelligence (SSI) is reportedly in talks to raise funding at a valuation of $20 billion. The figure is astronomical for a company that hasn't released a single product or published any research papers describing its progress. Instead, the high valuation seems to rest entirely on founder Ilya Sutskever's reputation.
OpenAI Alumnus Ilya Sutskever Seeking Billions
When Sutskever left his role at OpenAI in May 2024, the company lost a prolific AI researcher who was behind some of the most significant machine learning breakthroughs of this century. With little more than a roughly formed idea to develop "safe superintelligence," it took Sutskever just three months to raise a billion dollars for his new AI safety venture. Contributors to that initial funding round included heavyweight tech investors Andreessen Horowitz and Sequoia Capital. According to Reuters, which broke the story on Friday, Feb. 7, both new and existing investors are participating in the latest funding talks.
SSI Among Fastest-Growing AI Startup Valuations
If SSI hits its target valuation, it will enter the record books as one of the fastest-growing startups ever. For comparison, OpenAI took eight years to reach a $27 billion valuation, a feat it achieved with a $300 million funding round in April 2023. In recent years, AI startup valuations have ballooned, with companies like Anthropic and Elon Musk's xAI raising sums that would once have been impossible for such early-stage ventures. xAI is especially notable for the speed with which it has raised capital. Fourteen months after its founding, it closed a $6 billion Series B at a post-money valuation of $24 billion. Six months later, in November 2024, that figure jumped to $50 billion at the startup's next funding round.
Product Runway Remains a Mystery
Aside from a vague commitment to AI safety, it is difficult to ascertain exactly what SSI plans to develop. The startup has no official social media accounts and no marketing team. Its website is equally sparse, consisting only of a single message from Sutskever and fellow co-founders Daniel Gross and Daniel Levy. Notably, their message rejects the hype and self-aggrandizement characteristic of many other startups. The usual references to "product-market fit" and "ambitious growth plans" are also conspicuously absent. Instead, the SSI founders claim their "singular focus means no distraction by management overhead or product cycles" and that the company is "insulated from short-term commercial pressures." Whatever they are working on, the SSI team (reportedly just ten people as of September 2024) must have something to show investors. Otherwise, how did they manage to justify the $20 billion price tag?
[2]
OpenAI cofounder Sutskever's SSI in talks to be valued at $20 billion: Sources
AI startup Safe Superintelligence, cofounded by OpenAI's Ilya Sutskever, seeks funding at a $20 billion valuation, quadrupling its previous worth amid the AI industry's challenges.
Safe Superintelligence, an artificial intelligence startup co-founded by OpenAI's former chief scientist Ilya Sutskever last year, is in talks to raise funding at a valuation of at least $20 billion, four sources told Reuters. That would quadruple the company's $5 billion valuation from its last funding round in September, when it raised $1 billion from five investors including Sequoia Capital, Andreessen Horowitz, and DST Global. SSI's fundraising tests the ability of high-profile AI ventures to continue to command premium valuations following an industry-wide reappraisal prompted by Chinese startup DeepSeek's unveiling of its low-cost AI last month. SSI, which has not generated any revenue, has said its mission is to develop "safe superintelligence" that is smarter than humans while aligned with human interests. The company's conversations with existing and new investors are still in the early stages and terms could still change, the sources, who requested anonymity to discuss private matters, said this week. It was not clear how much money SSI was seeking to raise. SSI, which was founded in June with offices in Palo Alto and Tel Aviv, did not respond to requests for comment. Sutskever's cofounders are Daniel Gross, who previously led AI initiatives at Apple, and Daniel Levy, a former OpenAI researcher.
Secretive startup
Beyond the cursory explanation of the company's goals for safe AI, not much is known about the secretive startup or its work. What has fueled interest among investors is Sutskever's reputation and the novel approach he has said his team is working on. In AI circles, he is a legend for his contributions to breakthroughs that underpin the investment frenzy in generative AI. He was an early advocate of scaling, which means dedicating vast amounts of computing power and data to refining AI models. That concept was the foundation that led to generative AI advances like OpenAI's ChatGPT, setting the course for a wave of tens of billions of dollars in investment in chips, data centers and energy. Sutskever was also early in seeing the potential ceiling of such an approach due to the dwindling pool of available data to train models. Recognizing the importance of putting resources into the inference stage, when a trained model draws conclusions, he founded the team that worked on what would become OpenAI's latest series of reasoning models, setting a new research direction that has been widely followed. Making clear to investors not to expect short-term windfalls, SSI has said it intends to "scale in peace" by insulating its progress from short-term commercial pressures. This sets it apart from other AI labs, including OpenAI, which started as a nonprofit but shifted focus to commercial products after ChatGPT unexpectedly took off in 2022. OpenAI generated nearly $4 billion in revenue last year and has forecast $11.6 billion in revenue this year. Little is publicly known about SSI's approach. In a Reuters interview last year, Sutskever, 38, said SSI was pursuing a new research direction, calling it "a new mountain to climb", but shared few other details. Fundraising for so-called foundation model companies has shown no signs of slowing down. OpenAI is in talks to double its valuation to $300 billion, while rival Anthropic is finalizing a funding round that would value it at $60 billion.
Still, investors face fresh questions about their outsized bets following the disruption from Chinese startup DeepSeek, which developed open-source models that rivaled the top US AI models at a fraction of the cost. The popularity of DeepSeek knocked nearly $600 billion off Nvidia's market capitalization in late January. But it has not deterred big tech companies from pouring ever more investment into their AI infrastructure this year, according to recent earnings statements.
[3]
Exclusive-OpenAI co-founder Sutskever's SSI in talks to be valued at $20 billion, sources say
(Reuters) - Safe Superintelligence, an artificial intelligence startup co-founded by OpenAI's former chief scientist Ilya Sutskever last year, is in talks to raise funding at a valuation of at least $20 billion, four sources told Reuters. That would quadruple the company's $5 billion valuation from its last funding round in September, when it raised $1 billion from five investors including Sequoia Capital, Andreessen Horowitz, and DST Global. SSI's fundraising tests the ability of high-profile AI ventures to continue to command premium valuations following an industry-wide reappraisal prompted by Chinese startup DeepSeek's unveiling of its low-cost AI last month. SSI, which has not generated any revenue, has said its mission is to develop "safe superintelligence" that is smarter than humans while aligned with human interests. The company's conversations with existing and new investors are still in the early stages and terms could still change, the sources, who requested anonymity to discuss private matters, said this week. It was not clear how much money SSI was seeking to raise. SSI, which was founded in June with offices in Palo Alto and Tel Aviv, did not respond to requests for comment. Sutskever's co-founders are Daniel Gross, who previously led AI initiatives at Apple, and Daniel Levy, a former OpenAI researcher.
SECRETIVE STARTUP
Beyond the cursory explanation of the company's goals for safe AI, not much is known about the secretive startup or its work. What has fueled interest among investors is Sutskever's reputation and the novel approach he has said his team is working on. In AI circles, he is a legend for his contributions to breakthroughs that underpin the investment frenzy in generative AI. He was an early advocate of scaling, which means dedicating vast amounts of computing power and data to refining AI models. That concept was the foundation that led to generative AI advances like OpenAI's ChatGPT, setting the course for a wave of tens of billions of dollars in investment in chips, data centers and energy. Sutskever was also early in seeing the potential ceiling of such an approach due to the dwindling pool of available data to train models. Recognizing the importance of putting resources into the inference stage, when a trained model draws conclusions, he founded the team that worked on what would become OpenAI's latest series of reasoning models, setting a new research direction that has been widely followed. Making clear to investors not to expect short-term windfalls, SSI has said it intends to "scale in peace" by insulating its progress from short-term commercial pressures. This sets it apart from other AI labs, including OpenAI, which started as a nonprofit but shifted focus to commercial products after ChatGPT unexpectedly took off in 2022. OpenAI generated nearly $4 billion in revenue last year and has forecast $11.6 billion in revenue this year. Little is publicly known about SSI's approach. In a Reuters interview last year, Sutskever, 38, said SSI was pursuing a new research direction, calling it "a new mountain to climb", but shared few other details. Fundraising for so-called foundation model companies has shown no signs of slowing down. OpenAI is in talks to double its valuation to $300 billion, while rival Anthropic is finalizing a funding round that would value it at $60 billion. Still, investors face fresh questions about their outsized bets following the disruption from Chinese startup DeepSeek, which developed open-source models that rivaled the top U.S. AI models at a fraction of the cost. The popularity of DeepSeek knocked nearly $600 billion off Nvidia's market capitalization in late January. But it has not deterred big tech companies from pouring ever more investment into their AI infrastructure this year, according to recent earnings statements. (Reporting by Krystal Hu in New York, Kenrick Cai and Anna Tong in San Francisco; editing by Kenneth Li and Nia Williams)
[4]
Ilya Sutskever's SSI reportedly raising new funding at $20B+ valuation - SiliconANGLE
Safe Superintelligence Inc., a high-profile artificial intelligence startup, is reportedly seeking to raise new capital at a valuation of at least $20 billion. Reuters today cited sources as saying that the company is in talks with both new and existing investors. SSI's existing investors include Sequoia, DST Global, Andreessen Horowitz, SV Angel and NFDG. The consortium led a $1 billion round for the company last September at a reported valuation of $5 billion. That SSI is now gearing up to raise capital at a valuation at least four times as high suggests investors are optimistic about its technology. The company launched last June to develop AI models that possess "superintelligence" as well as guardrails for blocking harmful output. Its founding team included former OpenAI chief scientist Ilya Sutskever, AI researcher Daniel Levy and Daniel Gross, a one-time Y Combinator partner. Gross, the company's Chief Executive Officer, also led Apple Inc.'s AI development efforts for several years. Sutskever has stated that SSI "will not do anything else" besides developing AI models with superintelligence, which suggests it won't generate revenue in the near term. Few details are available about how SSI plans to go about building such models. According to Reuters, Sutskever has indicated that the company will follow a "new research direction" rather than using existing AI development methods. Developers increase large language models' output quality by boosting their parameter counts, as well as the amount of hardware and training data at their disposal. In a recent talk at the NeurIPS machine learning conference, Sutskever suggested that this approach is nearing its limits. He argued that developers are struggling to source the growing amounts of high-quality training data necessary to keep enhancing LLMs' output quality. "We've achieved peak data and there'll be no more," Sutskever said. "We have to deal with the data that we have. There's only one internet." In 2012, Sutskever was part of the academic team that created AlexNet, one of the first modern computer vision models. The algorithm inspired a significant amount of deep learning research that helped lay the foundation for large language models. After the project, Sutskever worked at Google LLC for several years before co-founding OpenAI in 2015. Sutskever was the ChatGPT developer's chief scientist until last year. He worked on, among other projects, OpenAI's series of reasoning-optimized LLMs. Reasoning models were another focus of his recent NeurIPS keynote. "A system that reasons, the more it reasons, the more unpredictable it becomes," Sutskever told attendees. "And one reason to see that is because the chess AIs, the really good ones, are unpredictable to the best human chess players." SSI is not the only startup taking a new approach to building AI models. In December, Liquid AI Inc. raised $250 million from investors to develop so-called liquid neural networks. The startup says that such algorithms can match the output quality of cutting-edge LLMs using a fraction of the hardware. An AI model is made of artificial neurons, simple programs that each perform a small portion of the processing involved in generating a prompt response. Each neuron, in turn, includes components called weights and activation functions. Weights determine which pieces of data an AI model takes into account when making decisions.
Activation functions contain the code that the neuron uses to analyze this data. Standard LLMs' weights and activation functions don't change after training. Liquid neural networks, in contrast, can reconfigure those components during inference. That allows such algorithms to continuously adapt the way they perform processing based on the data they ingest.
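To make that contrast concrete, here is a minimal, hypothetical Python sketch; the class names, update rule, and parameters are invented for illustration and do not come from Liquid AI's or SSI's actual systems. A standard layer applies the same frozen weights to every input, while a "liquid-style" layer carries an internal state that is updated during inference, so its behavior shifts with the data it has recently processed.

```python
# Illustrative sketch only: names, update rule, and parameters are invented
# for illustration and are not taken from Liquid AI's or SSI's code.
import numpy as np

class StaticLayer:
    """Standard layer: weights and activation function are frozen after training."""
    def __init__(self, weights):
        self.weights = weights                 # learned during training, then fixed

    def forward(self, x):
        return np.tanh(self.weights @ x)       # same computation for every input

class LiquidStyleLayer:
    """Toy layer whose internal state keeps evolving while it processes inputs."""
    def __init__(self, weights, tau=0.5):
        self.weights = weights
        self.state = np.zeros(weights.shape[0])
        self.tau = tau                         # controls how quickly the state adapts

    def forward(self, x):
        # The hidden state is nudged toward each new input's representation,
        # so the layer's effective behavior changes with the data it ingests.
        drive = np.tanh(self.weights @ x + self.state)
        self.state = (1 - self.tau) * self.state + self.tau * drive
        return self.state

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 3))
static, liquid = StaticLayer(w), LiquidStyleLayer(w)
for _ in range(3):
    x = rng.normal(size=3)
    # The static layer's output depends only on the current input; the
    # liquid-style layer's output also depends on everything seen so far.
    print("static:", static.forward(x).round(3), "| liquid:", liquid.forward(x).round(3))
```

The toy update rule matters only for showing where the adaptation happens: inside the forward pass, after training has finished, which is the distinction the article draws between standard LLMs and liquid neural networks.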
[5]
Report: Ilya Sutskever's startup in talks to fundraise at roughly $20B valuation | TechCrunch
Safe Superintelligence, the AI startup founded by former OpenAI chief scientist Ilya Sutskever, is in talks to raise funding at a valuation of "at least" $20 billion, according to Reuters. It's not clear how much Safe Superintelligence -- which has yet to generate any revenue -- is looking to secure, but it could be substantial. The new figure is 4x the $5 billion valuation the company held last September. Little is known about Safe Superintelligence's work. The company, which also counts ex-OpenAI researcher Daniel Levy and former Apple AI projects lead Daniel Gross among its founding team, has raised $1 billion so far. Existing investors include Sequoia Capital, Andreessen Horowitz, and DST Global. Sutskever is widely respected in the AI -- and wider tech -- industry. He's credited with contributing to major AI breakthroughs while at OpenAI, including the technical approach that made ChatGPT's development possible.
[6]
Report: AI Startup Safe Superintelligence Aims to Quadruple Valuation to $20 Billion | PYMNTS.com
Artificial intelligence (AI) startup Safe Superintelligence is reportedly in talks for a funding round that would quadruple its valuation to $20 billion. The company, which was co-founded in June by former OpenAI Chief Scientist Ilya Sutskever, was valued at $5 billion in a September round in which it raised $1 billion, Reuters reported Friday (Feb. 7), citing unnamed sources. The talks around a new funding round are in their early stages and the details could change, according to the report. It's not clear how much money the company is looking to raise, the report said. Safe Superintelligence did not immediately reply to PYMNTS' request for comment. Sutskever announced the launch of the AI startup in June, a month after stepping down from OpenAI. Safe Superintelligence said at the time in a social media post that the company approaches "safety and capabilities in tandem." "We plan to advance capabilities as fast as possible while making sure our safety always remains ahead," the company wrote. "This way, we can scale in peace. Our singular focus means no distraction by management overhead or product cycles, and our business model means safety, security and progress are all insulated from short-term commercial pressures." Many people within the AI community interpreted Sutskever's decision to start Safe Superintelligence as a response to what he perceived as a shift in OpenAI's focus, from its original mission of developing safe and beneficial AGI to a more commercial focus, PYMNTS reported at the time. When Safe Superintelligence raised $1 billion in its September funding round, management said the company planned to use the funds to boost its computing power and hire talent. Investors in the round included Andreessen Horowitz, Sequoia and NFDG, an investment partnership run in part by Safe Superintelligence CEO Daniel Gross. "It's important for us to be surrounded by investors who understand, respect and support our mission, which is to make a straight shot to safe superintelligence and in particular to spend a couple of years doing R&D on our product before bringing it to market," Gross told Reuters at the time.
Safe Superintelligence (SSI), an AI startup co-founded by former OpenAI chief scientist Ilya Sutskever, is in talks to raise funding at a $20 billion valuation. The company, which has no product or revenue, is banking on Sutskever's reputation and a novel approach to AI development.
Safe Superintelligence (SSI), an artificial intelligence startup co-founded by former OpenAI chief scientist Ilya Sutskever, is reportedly in talks to raise funding at a valuation of at least $20 billion [1]. This potential valuation comes less than seven months after the company's founding and would quadruple its previous $5 billion valuation from September 2024 [2].
What makes SSI's valuation particularly noteworthy is that the company has yet to release a single product or publish any research papers describing its progress [1]. The startup, which was founded in June 2024 with offices in Palo Alto and Tel Aviv, has not generated any revenue [3]. Instead, its high valuation appears to rest entirely on the reputation of its founders, particularly Ilya Sutskever.
Sutskever, widely respected in AI circles, left his role at OpenAI in May 2024. He is joined by co-founders Daniel Gross, who previously led AI initiatives at Apple, and Daniel Levy, a former OpenAI researcher [2]. The team, reportedly consisting of just ten people as of September 2024, has managed to attract significant investor interest despite the lack of a tangible product [1].
SSI's stated mission is to develop "safe superintelligence" that is smarter than humans while aligned with human interests [3]. The company has emphasized its intention to "scale in peace" by insulating its progress from short-term commercial pressures [2]. This approach sets SSI apart from other AI labs that have shifted focus to commercial products.
The company's initial funding round in September 2024 raised $1 billion from investors including Sequoia Capital, Andreessen Horowitz, and DST Global [4]. The current funding talks involve both existing and new investors, although the exact amount being sought is unclear [3].
SSI's fundraising efforts come at a time when the AI industry is facing new challenges. The recent unveiling of low-cost AI by Chinese startup DeepSeek has prompted an industry-wide reappraisal of valuations [2]. However, fundraising for foundation model companies continues to show no signs of slowing down, with OpenAI reportedly in talks to double its valuation to $300 billion and Anthropic finalizing a funding round at a $60 billion valuation [3].
In AI circles, Sutskever is known for his contributions to breakthroughs in generative AI, including his early advocacy for scaling, which means dedicating vast amounts of computing power and data to refining AI models [2]. However, he has also recognized the limitations of this approach due to the dwindling pool of available training data. In a recent interview, Sutskever mentioned that SSI is pursuing a "new research direction," calling it "a new mountain to climb," though details remain scarce [3].
The potential $20 billion valuation of SSI, despite its lack of product or revenue, highlights the continued investor enthusiasm for AI startups, particularly those led by high-profile figures in the field. It also underscores the growing focus on AI safety and the development of superintelligent systems that are aligned with human interests [5]. As the AI landscape continues to evolve rapidly, SSI's progress and eventual product reveal will be closely watched by industry observers and competitors alike.