Curated by THEOUTPOST
On Wed, 4 Sept, 4:11 PM UTC
20 Sources
[1]
OpenAI cofounder Sutskever's new safety-focused AI startup SSI raises $1 billion
Safe Superintelligence (SSI), newly co-founded by OpenAI's former chief scientist Ilya Sutskever, has raised $1 billion in cash to help develop safe artificial intelligence systems that far surpass human capabilities, company executives told Reuters.

SSI, which currently has 10 employees, plans to use the funds to acquire computing power and hire top talent. It will focus on building a small, highly trusted team of researchers and engineers split between Palo Alto, California, and Tel Aviv, Israel. The company declined to share its valuation, but sources close to the matter said it was valued at $5 billion.

The funding underlines how some investors are still willing to make outsized bets on exceptional talent focused on foundational AI research, despite a general waning of interest in funding such companies, which can be unprofitable for years and which has driven several startup founders to leave their posts for tech giants.

Investors included top venture capital firms Andreessen Horowitz, Sequoia Capital, DST Global and SV Angel. NFDG, an investment partnership run by Nat Friedman and SSI's chief executive Daniel Gross, also participated.

"It's important for us to be surrounded by investors who understand, respect and support our mission, which is to make a straight shot to safe superintelligence and in particular to spend a couple of years doing R&D on our product before bringing it to market," Gross said in an interview.

AI safety, which refers to preventing AI from causing harm, is a hot topic amid fears that rogue AI could act against the interests of humanity or even cause human extinction. A California bill seeking to impose safety regulations on AI companies has split the industry: it is opposed by companies like OpenAI and Google, and supported by Anthropic and Elon Musk's xAI.

Sutskever, 37, is one of the most influential technologists in AI. He co-founded SSI in June with Gross, who previously led AI initiatives at Apple, and Daniel Levy, a former OpenAI researcher. Sutskever is chief scientist and Levy is principal scientist, while Gross is responsible for computing power and fundraising.

New mountain

Sutskever said his new venture made sense because he "identified a mountain that's a bit different from what I was working on." Last year, he was part of the board of OpenAI's non-profit parent, which voted to oust OpenAI CEO Sam Altman over a "breakdown of communications." Within days, he reversed his decision and joined nearly all of OpenAI's employees in signing a letter demanding Altman's return and the board's resignation. But the turn of events diminished his role at OpenAI: he was removed from the board and left the company in May. After Sutskever's departure, the company dismantled his "Superalignment" team, which worked to ensure AI stays aligned with human values in preparation for a day when AI exceeds human intelligence.

Unlike OpenAI's unorthodox corporate structure, implemented for AI safety reasons but which made Altman's ouster possible, SSI has a regular for-profit structure.

SSI is currently very much focused on hiring people who will fit in with its culture. Gross said the founders spend hours vetting whether candidates have "good character", and are looking for people with extraordinary capabilities rather than overemphasizing credentials and experience in the field. "One thing that excites us is when you find people that are interested in the work, that are not interested in the scene, in the hype," he added.

SSI says it plans to partner with cloud providers and chip companies to fund its computing power needs but hasn't yet decided which firms it will work with. AI startups often work with companies such as Microsoft and Nvidia to address their infrastructure needs.

Sutskever was an early advocate of scaling, the hypothesis that AI models improve in performance when given vast amounts of computing power. The idea and its execution kicked off a wave of AI investment in chips, data centers and energy, laying the groundwork for generative AI advances like ChatGPT. Sutskever said he will approach scaling in a different way than his former employer, without sharing details. "Everyone just says scaling hypothesis. Everyone neglects to ask, what are we scaling?" he said. "Some people can work really long hours and they'll just go down the same path faster. It's not so much our style. But if you do something different, then it becomes possible for you to do something special."
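As background for Sutskever's "what are we scaling?" remark: in the research literature, the scaling hypothesis is usually made concrete through empirical scaling laws. One commonly cited form, from Hoffmann et al. (2022), offered here purely as an illustration of the idea and not as SSI's or OpenAI's own formula, models a language model's training loss L as a function of parameter count N and training-token count D:

    L(N, D) = E + A/N^{\alpha} + B/D^{\beta}

where E is an irreducible loss floor and A, B, \alpha, \beta are constants fitted to experiments. Under this view, more compute buys larger N and D and therefore predictably lower loss; Sutskever's question challenges whether N and D are the right quantities to grow in the first place.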
[2]
Exclusive-OpenAI Co-Founder Sutskever's New Safety-Focused AI Startup SSI Raises $1 Billion
[3]
OpenAI co-founder Sutskever's new safety-focused AI startup SSI raises $1 billion
[4]
Exclusive-OpenAI co-founder Sutskever's new safety-focused AI startup SSI raises $1 billion
[5]
OpenAI co-founder Ilya Sutskever's new safety-focused AI startup SSI...
[6]
Ex-OpenAI Chief Scientist Ilya Sutskever's 10-Employee Startup Has Raised $1B
Safe Superintelligence (SSI), a three-month-old startup, is valued at a staggering $5 billion.

Safe Superintelligence (SSI), an A.I. startup launched by OpenAI's former chief scientist Ilya Sutskever in June, has already raised $1 billion in venture funding despite having only ten employees, the company announced on its website yesterday (Sept. 4). SSI was co-founded by Sutskever, Daniel Gross, a former Y Combinator partner who previously led A.I. efforts at Apple, and Daniel Levy, who worked alongside Sutskever at OpenAI. The startup is developing artificial general intelligence, or AGI, while retaining a focus on safety.

NFDG, a venture capital firm run by Gross and Nat Friedman, participated in SSI's fundraising alongside Andreessen Horowitz, Sequoia Capital, DST Global and SV Angel. The startup is now valued at $5 billion, according to Reuters, which cited sources familiar with the matter.

The funds will be partially earmarked for hiring at SSI, which says it is "assembling a lean, cracked team of the world's best engineers and researchers dedicated to focusing on SSI and nothing else." The company is looking to fill positions across data, hardware, machine learning and systems, according to SSI's job application form. SSI will emphasize "good character" and extraordinary abilities over experience and credentials, Gross told Reuters, adding that the startup plans to spend the next few years devoted to research and development. The company currently has only ten employees, spread between its two offices in Palo Alto, Calif., and Tel Aviv, Israel.

SSI's investments will also go towards building up computing power, although the company has yet to partner with any particular cloud providers or chipmakers. Sutskever told Reuters that the startup's approach to scaling will differ from that of OpenAI but did not specify how. He also noted that, although SSI will not open-source its primary work yet, there will hopefully "be many opportunities to open-source relevant superintelligence safety work."

Sutskever, 37, joined Google in 2013 after completing a Ph.D. under the A.I. academic Geoffrey Hinton at the University of Toronto. He went on to co-found OpenAI in 2015 and served as its chief scientist until May of this year. A safety team co-led by Sutskever that oversaw A.I.'s existential risks was disbanded shortly after his departure.

Sutskever was a key member of the four-person OpenAI board that briefly ousted the company's CEO, Sam Altman, last year before the executive was reinstated following pushback from investors and employees. Sutskever said he regretted his involvement in the firing and was subsequently pushed off the OpenAI board. The ousting was made possible by OpenAI's unique corporate structure: originally founded as a nonprofit, the company is overseen by an independent nonprofit board and has a capped-profit arm. SSI, meanwhile, has a traditional for-profit structure.

Despite his previous struggles within OpenAI, Sutskever told Reuters he has "a very high opinion about the industry" and of the safety efforts of other A.I. companies. "I think that as people continue to make progress, all the different companies will realize, maybe at slightly different times, the nature of the challenge that they're facing."
[7]
OpenAI co-founder's new startup raises $1 billion; it will focus on ... - Times of India
Safe Superintelligence (SSI), a new AI startup co-founded by Ilya Sutskever, has secured $1 billion in funding to push the boundaries of artificial intelligence. Sutskever, a former chief scientist at OpenAI, aims to create AI systems far beyond human capabilities while ensuring their safety. With a focus on responsible AI development, SSI is set to take a different approach from its predecessors.

The startup, with just 10 employees, will use the funds to acquire high-performance computing resources and recruit top talent. Based in Palo Alto, California, and Tel Aviv, Israel, the company aims to build a small, highly trusted team of researchers and engineers. Although SSI has not revealed its valuation, sources suggest the company is already valued at $5 billion. This signals that, despite a slowdown in interest in AI startups, investors are still willing to make bold bets on firms with exceptional talent and ambitious goals.

The list of investors backing SSI includes prominent venture capital firms such as Andreessen Horowitz, Sequoia Capital, DST Global and SV Angel. Additional funding came from NFDG, a partnership led by Nat Friedman and SSI's CEO Daniel Gross. Gross emphasized the importance of working with investors who support the company's long-term vision of safe AI. "We want to spend the next few years on R&D before taking our product to market," he said. He also highlighted the growing focus on AI safety, as concerns rise about the potential risks of AI systems acting against human interests.

Sutskever's departure from OpenAI earlier this year followed a series of internal conflicts. He was part of the board that controversially voted to remove OpenAI CEO Sam Altman but later reversed his stance. After leaving OpenAI, Sutskever co-founded SSI, where he hopes to explore new directions in AI development. Unlike OpenAI, which operates with an unconventional structure intended to prioritize AI safety, SSI has adopted a traditional for-profit model.

The company is currently focused on assembling a team aligned with its values and goals. Gross noted that they prioritize candidates' character and genuine interest in the work over industry experience. SSI is also exploring partnerships with cloud providers and chip manufacturers to meet its computing power needs. While Sutskever remains tight-lipped about the details, he has hinted that SSI will approach AI scaling differently from his former employer, OpenAI.
[8]
OpenAI co-founder Sutskever's new safety-focused AI startup SSI raises $1 billion
[9]
Ilya Sutskever's AI Startup, Safe Superintelligence, Raises $1 Billion
SSI, with a current team of 10 employees, plans to use the funds to acquire computing power and hire top talent. The company aims to build a small, highly trusted team of researchers and engineers, with operations in both Palo Alto, California, and Tel Aviv, Israel, according to a Reuters report. While the company declined to disclose its valuation, sources close to the matter put it at $5 billion.

The funding highlights that some investors are still willing to make significant bets on exceptional talent focused on foundational AI research. This is despite a general decline in interest in funding such companies, which can be unprofitable for extended periods, a dynamic that has led several startup founders to leave for tech giants, the report added.

"It's important for us to be surrounded by investors who understand, respect and support our mission, which is to make a straight shot to safe superintelligence and in particular to spend a couple of years doing R&D on our product before bringing it to market," SSI chief executive Daniel Gross said in an interview.

SSI plans to partner with cloud providers and chip companies to meet its computing power needs, though it has not yet decided which firms it will collaborate with. AI startups often rely on companies like Microsoft and Nvidia to support their infrastructure requirements.

Sutskever, an early proponent of the scaling hypothesis, which suggests that AI models improve with increased computing power, played a key role in sparking a surge of AI investment in chips, data centers and energy. That foundation has enabled advances in generative AI, such as ChatGPT. While Sutskever said he will approach scaling differently than his previous employer did, he did not provide further details. "Everyone just says scaling hypothesis. Everyone neglects to ask, what are we scaling?" he said. "Some people can work really long hours and they'll just go down the same path faster. It's not so much our style. But if you do something different, then it becomes possible for you to do something special."

Sutskever founded Safe Superintelligence in June. The company, headquartered in Palo Alto with offices in Tel Aviv, is led by Sutskever, entrepreneur and investor Daniel Gross, and former OpenAI employee Daniel Levy. Gross previously co-founded the AI startup Cue, which Apple acquired in 2013 for a reported $40-60 million.

SSI describes itself as the world's first lab dedicated solely to developing safe superintelligence, and its mission is clear: to build a safe superintelligence. "We will pursue safe superintelligence in a straight shot, with one focus, one goal, and one product. We will do it through revolutionary breakthroughs produced by a small cracked team," said Sutskever. The company emphasised that safety and capabilities will be addressed in tandem, as technical problems requiring revolutionary engineering and scientific breakthroughs. SSI aims to advance capabilities rapidly while ensuring that safety remains paramount.

Sutskever left OpenAI in May and was succeeded there by Jakub Pachocki. Last year, reports surfaced that Sutskever was concerned about AGI safety and the rapid pace at which OpenAI was advancing, leading to tensions with OpenAI chief Sam Altman. On November 17, 2023, Sutskever and other board members fired Altman; by November 21, 2023, the decision was reversed and Altman was reinstated as CEO. Sutskever publicly expressed regret for his role in the episode, stating that he never intended to harm OpenAI and deeply regretted his participation in the board's actions.
[10]
Exclusive-OpenAI co-founder Sutskever's new safety-focused AI startup SSI raises $1 billion
[11]
OpenAI co-founder Ilya Sutskever's new AI firm raises $1 billion
Safe Superintelligence (SSI), Sutskever's AI startup, said in a post on X, formerly Twitter, that its investors include Andreessen Horowitz, Sequoia Capital, DST Global and SV Angel. NFDG also participated in the fundraising round. NFDG is a venture capital partnership between Nat Friedman, former chief executive of GitHub, and SSI's co-founder Daniel Gross.

SSI was founded in June with a focus on building safe AI models, shortly after Sutskever left OpenAI, where he was chief scientist and led a team focused on developing safety systems to control AI and keep it in line with a set of human values. "Building safe superintelligence is the most important technical problem of our time," SSI said on its website.

The company currently has offices in Palo Alto, California, and Tel Aviv, Israel, and wants to form a small team of "top technical talent" including engineers and researchers, according to the website.

Sutskever left OpenAI in May, months after helping with the initial ousting of Chief Executive Sam Altman, a move he said he regretted just days later.

News Corp, owner of The Wall Street Journal and Dow Jones Newswires, has a content-licensing partnership with OpenAI.
[12]
SSI AI makes a billion-dollar bet
SSI AI, a new startup founded by former OpenAI chief scientist Ilya Sutskever, has raised a staggering $1 billion in its first round of funding. At a time when many investors are pulling back from AI startups over concerns about profitability, SSI AI's mission of building secure superintelligence sets it apart from Sutskever's former company, OpenAI. Focused on developing AI systems that exceed human capabilities, the startup has already assembled a core team of researchers and engineers split between California and Israel.

At the heart of SSI AI's vision is a commitment to creating systems that are secure and capable of overcoming the limitations of current AI. Sutskever, who was instrumental in shaping OpenAI's research, is now focused on building what his team describes as "secure superintelligence." Still in its early stages, the company has received backing from venture capital giants such as Andreessen Horowitz and Sequoia Capital. While keeping a low profile on its valuation, SSI AI is said to be worth around $5 billion.

The new company's mission will sound familiar: safety-focused AI development was central to OpenAI's original charter before its partnership with Microsoft, and Sutskever's departure from OpenAI has been linked in reports to disagreements over how much weight that commitment should carry.

According to Reuters, this funding is a clear demonstration of the willingness of some investors to back highly specialized AI research, even in an environment of industry skepticism. Unlike other tech startups, SSI AI's distinctiveness lies in its focus on safety, which has become an increasingly prominent concern in the AI debate. The team's emphasis on preventing AI from posing risks to society has attracted both attention and discussion, especially in light of recent regulatory debates in California.

Although still a small business, SSI AI aims to grow rapidly. The funds raised will allow the company to expand its computing resources and hire top talent. With plans to partner with major cloud and chip providers for infrastructure, SSI AI is poised to become a serious competitor in the AI space. However, the company has yet to announce which providers it will collaborate with, or how it will secure the vast computing power its ambitious projects require.

What sets SSI AI apart is its approach to scaling AI. Sutskever's previous work at OpenAI was largely focused on the scaling hypothesis, the idea that AI performance increases significantly with access to more computing power. His new initiative aims to approach scaling differently rather than simply repeating his previous efforts.

SSI AI's hiring process is thorough, evaluating not just skills but also the cultural fit of potential employees with the company's values. Daniel Gross, CEO of SSI AI, emphasizes the importance of vetting candidates for both ability and integrity in order to maintain a dependable, purpose-driven team; whether that approach survives growth into a large company remains to be seen.

SSI AI's journey is just beginning, but the company's large initial funding and the reputation of its leadership suggest that it is on track to make a major impact. While the startup has not announced concrete plans for its first products or services, it is expected to focus on basic AI research and spend several years on research and development before bringing anything to market.

With headquarters spanning two continents, will SSI AI be able to lay the foundations for what could become one of the most talked-about AI startups? As concerns about the potential risks of AI continue to grow, everyone will be watching closely to see how the company plans to address the issue of AI safety. Sutskever's departure from OpenAI earlier this year has only added to the intrigue surrounding SSI AI. Having played a key role in shaping OpenAI's work, Sutskever's new venture represents both a continuation of and a departure from his previous work.
[13]
Ex-OpenAI co-founder's new Safe Superintelligence startup raises $1B in three months
Ilya Sutskever's incredible journey from OpenAI co-founder and chief scientist to alleged attempted coup ringleader against friend Sam Altman to journeyman in the tech wilderness has reached a new milestone: a $1 billion funding round for his new venture from some of the biggest venture capital names in Silicon Valley, including Andreessen Horowitz, Sequoia Capital, DST Global, SV Angel and NFDG.

According to Reuters, which broke the news, Sutskever's company Safe Superintelligence Inc. (SSI), which he co-founded in June 2024 with fellow former OpenAI researcher Daniel Levy and Apple's former AI lead and Cue co-founder Daniel Gross, just earned the massive check in cash and is now valued at $5 billion. Sutskever took to X to celebrate the news and acknowledge the massive challenge ahead of him and his collaborators, writing: "Mountain: identified. Time to climb."

On its website, SSI writes that it will eschew the productization that OpenAI and other AI startups have pursued, and which allegedly led to Sutskever and other researchers' increasing disillusionment with Altman, instead focusing entirely on developing a "safe" artificial "superintelligence," the latter term referring to AI that is vastly smarter and more capable than most (or all) human beings. As SSI's website states: "SSI is our mission, our name, and our entire product roadmap, because it is our sole focus. Our team, investors, and business model are all aligned to achieve SSI. We approach safety and capabilities in tandem, as technical problems to be solved through revolutionary engineering and scientific breakthroughs. We plan to advance capabilities as fast as possible while making sure our safety always remains ahead. This way, we can scale in peace. Our singular focus means no distraction by management overhead or product cycles, and our business model means safety, security, and progress are all insulated from short-term commercial pressures."

The company, valued at $5 billion according to sources, has just 10 employees and is split between Palo Alto and Tel Aviv. The funding will be used to acquire computing power and attract top talent. Despite a general industry trend of waning interest in long-term AI research, the company has gained strong financial backing.

Reuters reports that SSI plans to focus on research and development over the next few years, with an emphasis on building a trusted, skilled team. Instead of prioritizing credentials, SSI seeks individuals with strong character and dedication to the mission. The company seized on the news of its new funding to open a call for new software engineering hires, with those interested invited to apply online through its website.

For enterprise decision makers, the news shows continued faith in AI products led by top talent, as well as indicating a potential major new AI model rival to OpenAI and other industry leaders. It also signals an intensifying war for top software engineering talent, now fueled by some of the biggest names in the Valley.
[14]
Sutskever strikes AI gold with billion-dollar backing for superintelligent AI
Top venture firms back SSI to develop "safe" AI with teams in Palo Alto and Tel Aviv.

On Wednesday, Reuters reported that Safe Superintelligence (SSI), a new AI startup cofounded by OpenAI's former chief scientist Ilya Sutskever, has raised $1 billion in funding. The three-month-old company plans to focus on developing what it calls "safe" AI systems that surpass human capabilities.

The fundraising effort shows that even amid growing skepticism around massive investments in AI tech that so far have failed to be profitable, some backers are still willing to place large bets on high-profile talent in foundational AI research. Venture capital firms like Andreessen Horowitz, Sequoia Capital, DST Global, and SV Angel participated in the SSI funding round.

SSI aims to use the new funds for computing power and attracting talent. With only 10 employees at the moment, the company intends to build a larger team of researchers across locations in Palo Alto and Tel Aviv, Reuters reported. While SSI did not officially disclose its valuation, sources told Reuters it was valued at $5 billion, a stunningly large figure just three months after the company's founding and with no publicly known products yet developed.

Son of OpenAI

Much like Anthropic before it, SSI formed as a breakaway company founded in part by former OpenAI employees. Sutskever, 37, cofounded SSI with Daniel Gross, who previously led AI initiatives at Apple, and Daniel Levy, a former OpenAI researcher.

Sutskever's departure from OpenAI followed a rough period at the company that reportedly included disenchantment that OpenAI management did not devote proper resources to his "superalignment" research team, and then Sutskever's involvement in the brief ouster of OpenAI CEO Sam Altman last November. After leaving OpenAI in May, Sutskever said his new company would "pursue safe superintelligence in a straight shot, with one focus, one goal, and one product."

Superintelligence, as we've noted previously, is a nebulous term for a hypothetical technology that would far surpass human intelligence. There is no guarantee that Sutskever will succeed in his mission (and skeptics abound), but the star power he gained from his academic bona fides and from being a key cofounder of OpenAI has made rapid fundraising for his new company relatively easy.

The company plans to spend a couple of years on research and development before bringing a product to market, and its self-proclaimed focus on "AI safety" stems from the belief that powerful AI systems that could pose existential risks to humanity are on the horizon. The "AI safety" topic has sparked debate within the tech industry, with companies and AI experts taking different stances on proposed safety regulations, including California's controversial SB-1047, which may soon become law. Since the topic of existential risk from AI is still hypothetical and frequently guided by personal opinion rather than science, that particular controversy is unlikely to die down any time soon.
[15]
OpenAI Co-Founder Ilya Sutskever's AI Startup Raises $1B From Andreessen Horowitz, Sequoia As Sam Altman-Led Company Faces Exodus
Safe Superintelligence (SSI), co-founded by Ilya Sutskever, has raised $1 billion to develop advanced artificial intelligence systems.

What Happened: SSI, established just three months ago, aims to build safe AI systems that surpass human capabilities. The startup, currently with 10 employees, plans to use the funds to acquire computing power and hire top talent. It operates from Palo Alto, California, and Tel Aviv, Israel, the company said, confirming a Reuters report on Thursday.

Investors in this funding round include prominent venture capital firms such as Andreessen Horowitz, Sequoia Capital, DST Global, and SV Angel. NFDG, an investment partnership run by Nat Friedman and SSI's CEO Daniel Gross, also participated. Gross emphasized the importance of having investors who understand and support SSI's mission to develop safe superintelligence: "It's important for us to be surrounded by investors who understand, respect and support our mission."

Sutskever, who co-founded SSI in June, is a significant figure in AI technology. He previously served as OpenAI's chief scientist and was involved in the controversial ousting and subsequent reinstatement of OpenAI CEO Sam Altman.

Why It Matters: The launch of SSI comes at a critical juncture for Sutskever, who resigned from OpenAI in May. His departure followed significant internal turmoil, including a "mass exodus" from OpenAI's AI safety team, which raised concerns about the company's leadership under CEO Altman. Just a month after leaving OpenAI, Sutskever launched SSI, emphasizing that "superintelligence is within reach." The new venture aims to address the very AI safety issues that plagued his former employer.

Moreover, Sutskever's reputation in the AI community is notable: Elon Musk once described him as the "linchpin for OpenAI being successful," highlighting the high regard in which he is held.
[16]
OpenAI co-founder Ilya Sutskever's 'safe' AI start-up raises $1bn
OpenAI co-founder Ilya Sutskever has raised $1bn from investors including Sequoia and Andreessen Horowitz for a new business building "safe" artificial intelligence models. The deal values the three-month-old company, which currently has no product, at around $5bn, according to a person familiar with the matter.
Sutskever's start-up, Safe Superintelligence (SSI), will spend the new investment on computing resources to develop its model and on new staff to join his 10-person team. The former OpenAI chief scientist founded the company alongside Daniel Gross, a serial AI investor, and Daniel Levy, a former OpenAI researcher.
"We've identified a new mountain to climb that's a bit different from what I was working on previously. We're not trying to go down the same path faster. If you do something different, then it becomes possible for you to do something special," Sutskever told the Financial Times.
The company is building cutting-edge AI models and aims to challenge more established rivals, including Sutskever's former employer OpenAI, Anthropic and Elon Musk's xAI. OpenAI is currently in talks with investors about raising billions of dollars at a valuation of more than $100bn, while Anthropic and xAI were both valued at close to $20bn in funding rounds earlier this year. While those companies are all developing AI models with wide consumer and business applications, SSI said it is focused on "building a straight shot to safe superintelligence".
"It's important for us to be surrounded by investors who understand, respect and support our mission, which is to make a straight shot to safe superintelligence and in particular to spend a couple of years doing R&D on our product before bringing it to market," Gross, SSI's chief executive, told Reuters, which first reported the news.
Sutskever left OpenAI in May after leading a failed coup against chief executive Sam Altman. His team -- which was focused on "alignment", ensuring that AI systems that surpass human intelligence will act in the human interest -- was also disbanded. One executive at OpenAI said Sutskever's exit stemmed from a difference of opinion on how best to scale systems to gain intelligence at a time when the company was focused on more near-term goals.
SSI is now looking to hire in a competitive labour market for those with AI expertise. The company has offices in Palo Alto, California, and Tel Aviv, Israel. "We are assembling a lean, cracked team of the world's best engineers and researchers dedicated to focusing on SSI and nothing else," the company said on its website. "We offer an opportunity to do your life's work and help solve the most important technical challenge of our age."
[17]
Former OpenAI chief scientist's startup raises $1 billion
As the advent of generative artificial intelligence raises safety concerns, a new startup developing safe AI systems has raised $1 billion.
Safe Superintelligence (SSI), co-founded by OpenAI co-founder and former chief scientist Ilya Sutskever, has raised $1 billion in cash. The funds will be used for computing power and to hire researchers and engineers, executives told Reuters. The startup will focus on building "safe superintelligence" -- safe AI systems with reasoning abilities at or above the human level.
"SSI is our mission, our name, and our entire product roadmap, because it is our sole focus," the co-founders wrote in a public letter. "Our singular focus means no distraction by management overhead or product cycles, and our business model means safety, security, and progress are all insulated from short-term commercial pressures."
After its funding round, which counted heavyweights Andreessen Horowitz and Sequoia Capital as investors, the startup is valued at $5 billion, Reuters reported, citing unnamed people familiar with the matter. SSI declined to share its valuation with Reuters.
"It's important for us to be surrounded by investors who understand, respect and support our mission, which is to make a straight shot to safe superintelligence and in particular to spend a couple of years doing R&D on our product before bringing it to market," SSI chief executive Daniel Gross told Reuters.
Sutskever co-founded the 10-person startup in June with Gross, former AI lead at Apple, and Daniel Levy, a former OpenAI researcher. Gross is chief executive of the safe AI startup and Levy is principal scientist, with Sutskever serving as chief scientist.
Sutskever left OpenAI in May after almost a year leading the startup's "superalignment" team, which was focused on AI's existential dangers. The team was disbanded after Sutskever and his co-lead, Jan Leike, resigned from the ChatGPT maker. Sutskever, who was part of OpenAI for almost a decade, played a role in briefly ousting OpenAI chief executive Sam Altman in November.
"We've identified a new mountain to climb that's a bit different from what I was working on previously," Sutskever said in a statement shared with Quartz. "We're not trying to go down the same path faster. If you do something different, then it becomes possible for you to do something special."
[18]
OpenAI Co-Founder Ilya Sutskever's Safe AI Startup Raises $1 Billion - Decrypt
Months after resigning from AI developer OpenAI, former chief scientist Ilya Sutskever's new venture Safe Superintelligence (SSI) has raised $1 billion in funding, the company announced on Wednesday. According to SSI, the funding round included investments from NFDG, a16z, Sequoia, DST Global, and SV Angel. Reuters, citing sources "close to the matter," reported that SSI is already valued at $5 billion. Safe Superintelligence did not immediately respond to a request for comment from Decrypt.
In May, Sutskever and Jan Leike resigned from OpenAI, following the departure of Andrej Karpathy in February. In a post on Twitter, Leike cited a lack of resources and safety focus as the reason for his decision to leave the ChatGPT developer. "Stepping away from this job has been one of the hardest things I have ever done," Leike wrote. "Because we urgently need to figure out how to steer and control AI systems much smarter than us."
Sutskever's departure came, according to a report by The New York Times, after he led the OpenAI board and a handful of OpenAI executives to oust co-founder and CEO Sam Altman in November 2023. Altman was reinstated a week later.
In June, Sutskever announced the launch of his new AI development company, Safe Superintelligence Inc., which was co-founded by Daniel Gross, a former AI lead at Apple, and Daniel Levy, who also previously worked at OpenAI. According to Reuters, Sutskever serves as SSI's chief scientist, with Levy as principal scientist and Gross handling computing power and fundraising. "SSI is our mission, our name, and our entire product roadmap, because it is our sole focus," Safe Superintelligence wrote on Twitter in June. "Our team, investors, and business model are all aligned to achieve SSI."
With generative AI becoming more ubiquitous, developers have looked for ways to assure consumers and regulators that their products are safe. In August, OpenAI and Claude AI developer Anthropic announced agreements with the U.S. Department of Commerce's National Institute of Standards and Technology (NIST) to establish formal collaborations with the U.S. AI Safety Institute (AISI), giving the agency access to major new AI models from both companies. "We are happy to have reached an agreement with the U.S. AI Safety Institute for pre-release testing of our future models," OpenAI co-founder and CEO Sam Altman wrote on Twitter. "For many reasons, we think it's important that this happens at the national level. [The] U.S. needs to continue to lead."
[19]
Ilya Sutskever's startup, Safe Superintelligence, raises $1B
Safe Superintelligence (SSI), the AI startup co-founded by former OpenAI chief scientist Ilya Sutskever, has raised over $1 billion in capital from investors including NFDG (an investment partnership run by Nat Friedman and SSI's CEO Daniel Gross), a16z, Sequoia, DST Global and SV Angel.
SSI told Reuters that it plans to use the tranche to acquire computing power and hire talent, with a focus on building a team of researchers and engineers split between Palo Alto and Tel Aviv. As to what exactly they'll research (and who they might partner with), SSI isn't saying -- yet. Reuters, citing a source familiar with the matter, says that the new funding values SSI at $5 billion.
Sutskever, who is chief scientist at SSI, co-launched the company earlier this year with Gross and Daniel Levy, another ex-OpenAI researcher. Prior to SSI, Sutskever headed the now-dismantled Superalignment team at OpenAI, which focused on general safety research. Sutskever quietly departed OpenAI months after a highly publicized fallout between him, several former OpenAI board members and OpenAI CEO Sam Altman over what Sutskever has referred to as a "breakdown in communications."
[20]
OpenAI Vet Sutskever's Startup Reportedly Raises $1 Billion | PYMNTS.com
Safe Superintelligence, the company co-founded by OpenAI veteran Ilya Sutskever, has reportedly raised $1 billion. The company plans to use the funds to boost its computing power and hire talent, management told Reuters in an interview published Wednesday (Sept. 4). Safe Superintelligence (SSI) declined to share its valuation, though sources told Reuters the firm is valued at $5 billion.
Investors in the round included high-profile venture capital outfits like Andreessen Horowitz and Sequoia, along with NFDG, an investment partnership run in part by SSI CEO Daniel Gross. "It's important for us to be surrounded by investors who understand, respect and support our mission, which is to make a straight shot to safe superintelligence and in particular to spend a couple of years doing R&D on our product before bringing it to market," Gross told Reuters.
Meanwhile, Sutskever -- an OpenAI co-founder who had been the company's chief scientist -- said the new project made sense because he "identified a mountain that's a bit different from what I was working on."
Last year, Sutskever was part of the board at OpenAI that voted to unseat CEO Sam Altman over a "breakdown of communications," though he quickly changed his mind and joined an employee-led campaign for Altman's reinstatement. However, as Reuters notes, the incident "diminished" Sutskever's role at OpenAI. He was removed from the board and stepped down in May. After he left, the company dissolved his AI-safety-focused "superalignment" team.
Sutskever announced the launch of SSI in June, saying the company would focus solely on developing -- as the name suggests -- safe superintelligence without the pressure that comes with commercial interests. As PYMNTS wrote at the time, this has once again sparked a debate about whether such a feat is possible. Some experts question the feasibility of creating a superintelligent AI, given the limitations of AI systems and the obstacles to ensuring its safety.
"Critics of the superintelligence goal point to the current limitations of AI systems, which, despite their impressive capabilities, still struggle with tasks that require common sense reasoning and contextual understanding," that report said. "They argue that the leap from narrow AI, which excels at specific tasks, to a general intelligence that surpasses human capabilities across all domains is not merely a matter of increasing computational power or data."
Ilya Sutskever, co-founder of OpenAI, launches a new AI safety startup called Safe Superintelligence Inc. (SSI), securing $1 billion in funding. The company aims to address AI safety concerns and develop advanced AI systems.
Ilya Sutskever, a prominent figure in the artificial intelligence community and co-founder of OpenAI, has embarked on a new venture focused on AI safety. His startup, Safe Superintelligence Inc. (SSI), has successfully raised $1 billion in funding, marking a significant milestone in the pursuit of safer AI technologies [1].
The substantial funding round for SSI has attracted attention from major players in the tech industry. Disclosed backers include venture capital firms Andreessen Horowitz, Sequoia Capital, DST Global and SV Angel, along with NFDG, the investment partnership run by Nat Friedman and SSI CEO Daniel Gross [2]. The investment underscores the growing importance of AI safety in the tech world and the confidence in Sutskever's vision.
SSI's primary focus is on developing advanced AI systems with a strong emphasis on safety. The company aims to address critical concerns surrounding AI development, including potential risks and ethical considerations [3]. By prioritizing safety from the outset, SSI seeks to create AI technologies that are not only powerful but also reliable and trustworthy.
The launch of SSI and its successful funding round have sent ripples through the AI industry. Experts view this development as a significant step towards addressing the complex challenges associated with AI safety [4]. The substantial investment also highlights the growing recognition of the importance of responsible AI development among investors and tech giants.
Ilya Sutskever's background as a co-founder and chief scientist at OpenAI lends considerable credibility to SSI. His experience in developing large language models and his deep understanding of AI technologies position him as a key figure in the pursuit of safer AI systems [5]. Sutskever's transition from OpenAI to SSI represents a focused effort to tackle the specific challenges of AI safety.
As SSI begins its journey with substantial funding, the AI community eagerly anticipates the company's contributions to the field of AI safety. The startup faces the challenge of developing innovative solutions to complex safety issues while keeping pace with the rapid advancements in AI technology. The success of SSI could potentially set new standards for responsible AI development and influence the direction of the entire industry.
Reference
[2] U.S. News & World Report | Exclusive: OpenAI Co-Founder Sutskever's New Safety-Focused AI Startup SSI Raises $1 Billion
[4]