Curated by THEOUTPOST
On Sat, 14 Dec, 8:02 AM UTC
5 Sources
[1]
What Did Ilya See?
Ilya Sutskever, often touted as the GOAT of AI, believes superintelligent AI will be 'unpredictable.' When the co-founder and former chief scientist of OpenAI speaks, the world listens. At NeurIPS 2024, he spoke about the unpredictability of reasoning in AI. "The more it reasons, the more unpredictable it becomes," he said. "All the deep learning that we've been used to is very predictable because we've been working on replicating human intuition."

Sutskever pointed out that systems capable of reasoning, such as advanced chess-playing AI like AlphaZero, are already demonstrating unpredictability. "The best chess AIs are unpredictable to the best human chess players," he said. It is only a matter of time, he suggested, before these systems get smarter, to the point of achieving superintelligence.

He said that in the future, artificial superintelligence (ASI) systems will be able to understand complex concepts from limited data and will no longer be prone to confusion. Sutskever said that reasoning models will reduce errors like hallucinations by "autocorrecting" themselves in far more sophisticated ways. "AI systems that reason will be able to correct themselves, much like autocorrect -- but far grander," he added. "They will understand things from limited data. They will not get confused," he said, hinting at the possibility of 'self-aware AI,' which he views as a natural development. "Self-awareness is part of our own world models," he said.

Sutskever believes artificial superintelligence will evolve into truly agentic systems. "Right now, the systems are not agents in any meaningful sense, just very slightly agentic," he said. "But eventually, those systems are actually going to be agentic in real ways."

In June 2024, Sutskever launched his new AI startup, Safe Superintelligence Inc. (SSI), alongside Daniel Gross (former head of Apple AI) and Daniel Levy (investor and AI researcher). SSI is dedicated to developing safe and advanced AI systems, with the primary goal of achieving 'safe superintelligence.' Unlike many AI companies, it focuses on long-term safety and progress, avoiding the pressure of quick profits and product releases.

Sutskever said that the age of pre-training is over. "Pre-training as we know it will unquestionably end," he said, citing the limitations of data availability. "We have but one internet. You could even go as far as to say that data is the fossil fuel of AI. It was created somehow, and now we use it." He acknowledged that AI's current progress stems from scaling models and data, but said other scaling principles might emerge. "I want to highlight that what we are scaling now is just the first thing we figured out how to scale," said Sutskever.

Citing OpenAI's o1, he highlighted the growing focus on agents and synthetic data as pivotal to the future of AI, while acknowledging the challenges in defining synthetic data and optimising inference-time compute. "People feel like agents are the future, more concretely, but also a little bit vaguely, synthetic data," he said.

Drawing parallels to biological systems, Sutskever spoke about how nature might inspire the next breakthroughs. He referenced brain-to-body size scaling in mammals as a potential model for rethinking AI's architecture. Instead of linear improvements through scaling datasets and models, future AI systems might adopt entirely new scaling principles, guided by biology's efficiency and adaptability.
"There's a precedent in biology for different scaling," he said, suggesting that AI could evolve in ways we have yet to fully understand. Sutskever opened his talk at NeurIPS 2024 by revisiting a presentation from 10 years ago, where he and his colleagues introduced the concept of training large neural networks for tasks like translation. "If you have a large neural network with 10 layers, it can do anything that a human being can do in a fraction of a second," he quipped. This idea was rooted in the belief that artificial neurons could mimic biological neurons, with the assumption that the human brain's ability to process information quickly could be replicated in a neural network Sutskever pointed out how early models, including LSTMs, relied on basic parallelisation techniques like pipelining. He shared how these models used one layer per GPU to speed up training, achieving a 3.5x speedup with eight GPUs.Sutskever also touched on the origins of the scaling hypothesis, which posits that success in AI is guaranteed when larger datasets and neural networks are combined. He credited OpenAI's Alec Radford, Anthropic's Dario Amodei and Jared Kaplan for their roles in advancing this concept and laying the groundwork for the GPT models
[2]
AI with reasoning power will be less predictable, Ilya Sutskever says
VANCOUVER (Reuters) - Former OpenAI chief scientist Ilya Sutskever, one of the biggest names in artificial intelligence, had a prediction to make on Friday: reasoning capabilities will make technology far less predictable.

Accepting a "Test Of Time" award for his 2014 paper with Google's Oriol Vinyals and Quoc Le, Sutskever said a major change was on AI's horizon. An idea that his team had explored a decade ago, that scaling up data to "pre-train" AI systems would send them to new heights, was starting to reach its limits, he said. More data and computing power had resulted in ChatGPT, which OpenAI launched in 2022 to the world's acclaim.

"But pre-training as we know it will unquestionably end," Sutskever declared before thousands of attendees at the NeurIPS conference in Vancouver. "While compute is growing," he said, "the data is not growing, because we have but one internet."

Sutskever offered some ways to push the frontier despite this conundrum. He said technology itself could generate new data, or AI models could evaluate multiple answers before settling on the best response for a user, to improve accuracy. Other scientists have set sights on real-world data. But his talk culminated in a prediction for a future of superintelligent machines that he said "obviously" await, a point with which some disagree.

Sutskever this year co-founded Safe Superintelligence Inc in the aftermath of his role in Sam Altman's short-lived ouster from OpenAI, a role he said within days that he regretted. Long-in-the-works AI agents, he said, will come to fruition in that future age, have deeper understanding and be self-aware. He said AI will reason through problems as humans can.

There's a catch. "The more it reasons, the more unpredictable it becomes," he said. Reasoning through millions of options could make any outcome non-obvious. By way of example, AlphaGo, a system built by Alphabet's DeepMind, surprised experts of the highly complex board game with its inscrutable 37th move on a path to defeating Lee Sedol in a match in 2016. Sutskever said similarly, "the chess AIs, the really good ones, are unpredictable to the best human chess players." AI as we know it, he said, will be "radically different."

(Reporting By Jeffrey Dastin and Anna Tong in Vancouver; Editing by Sam Holmes)
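The "evaluate multiple answers" approach Sutskever alluded to is often implemented as best-of-N sampling. Below is a minimal sketch of that pattern; the generate and score callables are hypothetical stand-ins for a model and a verifier, not anything from his talk.

```python
# Bare-bones best-of-N sampling: draw several candidate answers and
# keep the one a scoring function likes best. `generate` and `score`
# are hypothetical stand-ins for a model and a verifier/reward model.
from typing import Callable

def best_of_n(generate: Callable[[str], str],
              score: Callable[[str, str], float],
              prompt: str,
              n: int = 8) -> str:
    candidates = [generate(prompt) for _ in range(n)]
    return max(candidates, key=lambda answer: score(prompt, answer))
```

Spending more compute at answer time this way, rather than only on pre-training, is one of the "other scaling principles" the talk gestured toward.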
[3]
AI with reasoning power will be less predictable, Ilya Sutskever says
VANCOUVER, Dec 13 (Reuters) - Former OpenAI chief scientist Ilya Sutskever, one of the biggest names in artificial intelligence, had a prediction to make on Friday: reasoning capabilities will make technology far less predictable.

Accepting a "Test Of Time" award for his 2014 paper with Google's Oriol Vinyals and Quoc Le, Sutskever said a major change was on AI's horizon. An idea that his team had explored a decade ago, that scaling up data to "pre-train" AI systems would send them to new heights, was starting to reach its limits, he said. More data and computing power had resulted in ChatGPT, which OpenAI launched in 2022 to the world's acclaim.

"But pre-training as we know it will unquestionably end," Sutskever declared before thousands of attendees at the NeurIPS conference in Vancouver. "While compute is growing," he said, "the data is not growing, because we have but one internet."

Sutskever offered some ways to push the frontier despite this conundrum. He said technology itself could generate new data, or AI models could evaluate multiple answers before settling on the best response for a user, to improve accuracy. Other scientists have set sights on real-world data. But his talk culminated in a prediction for a future of superintelligent machines that he said "obviously" await, a point with which some disagree.

Sutskever this year co-founded Safe Superintelligence Inc in the aftermath of his role in Sam Altman's short-lived ouster from OpenAI, a role he said within days that he regretted. Long-in-the-works AI agents, he said, will come to fruition in that future age, have deeper understanding and be self-aware. He said AI will reason through problems as humans can.

There's a catch. "The more it reasons, the more unpredictable it becomes," he said. Reasoning through millions of options could make any outcome non-obvious. By way of example, AlphaGo, a system built by Alphabet's DeepMind, surprised experts of the highly complex board game with its inscrutable 37th move on a path to defeating Lee Sedol in a match in 2016. Sutskever said similarly, "the chess AIs, the really good ones, are unpredictable to the best human chess players." AI as we know it, he said, will be "radically different."

Reporting By Jeffrey Dastin and Anna Tong in Vancouver; Editing by Sam Holmes
[4]
OpenAI co-founder Ilya Sutskever believes superintelligent AI will be 'unpredictable'
OpenAI co-founder Ilya Sutskever spoke on a range of topics at NeurIPS, the annual AI conference, before accepting an award for his contributions to the field. Sutskever gave his predictions for "superintelligent" AI -- AI more capable than humans at many tasks -- which he believes will be achieved at some point.

Superintelligent AI will be "different, qualitatively" from the AI we have today, Sutskever said -- and in some aspects unrecognizable. "[Superintelligent] systems are actually going to be agentic in a real way," Sutskever said, as opposed to the current crop of "very slightly agentic" AI. They'll "reason," and, as a result, become more unpredictable. They'll understand things from limited data. And they'll be self-aware, Sutskever believes.

They may want rights, in fact. "It's not a bad end result if you have AIs and all they want is to co-exist with us and just to have rights," Sutskever said.

After leaving OpenAI, Sutskever founded a lab, Safe Superintelligence (SSI), focused on general AI safety. SSI raised $1 billion in September.
[5]
OpenAI cofounder Ilya Sutskever says the way AI is built is about to change
OpenAI's cofounder and former chief scientist, Ilya Sutskever, made headlines earlier this year after he left to start his own AI lab, Safe Superintelligence Inc. He has avoided the limelight since his departure but made a rare public appearance in Vancouver on Friday at the Conference on Neural Information Processing Systems (NeurIPS).

"Pre-training as we know it will unquestionably end," Sutskever said onstage. This refers to the first phase of AI model development, when a large language model learns patterns from vast amounts of unlabeled data -- typically text from the internet, books, and other sources. During his NeurIPS talk, Sutskever said that, while he believes existing data can still take AI development farther, the industry is tapping out on new data to train on. This dynamic, he said, will eventually force a shift away from the way models are trained today. He compared the situation to fossil fuels: just as oil is a finite resource, the internet contains a finite amount of human-generated content.

"We've achieved peak data and there'll be no more," according to Sutskever. "We have to deal with the data that we have. There's only one internet."

Next-generation models, he predicted, are going to "be agentic in real ways." Agents have become a real buzzword in the AI field. While Sutskever didn't define them during his talk, they are commonly understood to be autonomous AI systems that perform tasks, make decisions, and interact with software on their own.

Along with being "agentic," he said, future systems will also be able to reason. Unlike today's AI, which mostly pattern-matches based on what a model has seen before, future AI systems will be able to work things out step by step in a way that is more comparable to thinking. The more a system reasons, "the more unpredictable it becomes," according to Sutskever. He compared the unpredictability of "truly reasoning systems" to how advanced AIs that play chess "are unpredictable to the best human chess players."

"They will understand things from limited data," he said. "They will not get confused."

Onstage, he drew a comparison between the scaling of AI systems and evolutionary biology, citing research that shows the relationship between brain and body mass across species. He noted that while most mammals follow one scaling pattern, hominids (human ancestors) show a distinctly different slope in their brain-to-body mass ratio on logarithmic scales. He suggested that, just as evolution found a new scaling pattern for hominid brains, AI might similarly discover new approaches to scaling beyond how pre-training works today.

After Sutskever concluded his talk, an audience member asked him how researchers can create the right incentive mechanisms for humanity to create AI in a way that gives it "the freedoms that we have as Homo sapiens."

"I feel like in some sense those are the kind of questions that people should be reflecting on more," Sutskever responded. He paused for a moment before saying that he doesn't "feel confident answering questions like this" because it would require a "top down government structure." The audience member suggested cryptocurrency, which made others in the room chuckle.

"I don't feel like I am the right person to comment on cryptocurrency, but there is a chance what you [are] describing will happen," Sutskever said. "You know, in some sense, it's not a bad end result if you have AIs and all they want is to coexist with us and also just to have rights. Maybe that will be fine... I think things are so incredibly unpredictable. I hesitate to comment but I encourage the speculation."
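For context, the brain-to-body relationship Sutskever pointed to is conventionally modeled as an allometric power law; the formulation below is the textbook version, not an equation from the talk:

\[
m_{\mathrm{brain}} \approx c \, m_{\mathrm{body}}^{\,k}
\quad\Longleftrightarrow\quad
\log m_{\mathrm{brain}} = \log c + k \log m_{\mathrm{body}}
\]

On log-log axes this is a straight line with slope k. Most mammals cluster around one value of k, while hominids lie on a line with a distinctly steeper slope, i.e. a different exponent: the biological precedent Sutskever cited for the possibility of an entirely different scaling regime in AI.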
Ilya Sutskever, co-founder of OpenAI, discusses the future of AI at NeurIPS 2024, predicting the rise of unpredictable superintelligent AI and the end of current pre-training methods due to data limitations.
Ilya Sutskever, co-founder and former chief scientist of OpenAI, shared his insights on the future of artificial intelligence at the NeurIPS 2024 conference. Sutskever, widely regarded as a leading figure in AI, predicted that superintelligent AI systems will be fundamentally different from current models and potentially unpredictable [1].

"The more it reasons, the more unpredictable it becomes," Sutskever stated, drawing parallels to advanced chess AIs that are already unpredictable to top human players [2].

Sutskever made a bold claim about the future of AI development, stating that "pre-training as we know it will unquestionably end" [3]. He cited the limitations of available data, comparing it to fossil fuels as a finite resource.

"We have but one internet. You could even go as far as to say that data is the fossil fuel of AI. It was created somehow, and now we use it," Sutskever explained [1].

Sutskever outlined several key characteristics of future AI systems:
- Genuinely agentic behavior, in contrast to today's "very slightly agentic" models [4]
- Reasoning, which will make them less predictable [2]
- The ability to understand things from limited data without getting confused [1]
- Self-awareness, which he views as a natural development [1]

To address the limitations of current pre-training methods, Sutskever suggested several potential approaches:
- Using the technology itself to generate new (synthetic) data [2]
- Having models evaluate multiple answers before settling on the best response [2]
- Entirely new scaling principles, potentially inspired by biology's brain-to-body scaling [5]

Sutskever's predictions raise important questions about the future of AI and its impact on society. He acknowledged the potential for AI systems to desire rights, stating, "It's not a bad end result if you have AIs and all they want is to co-exist with us and just to have rights" [4].

In response to questions about creating the right incentives for AI development, Sutskever emphasized the need for more reflection on these issues but expressed uncertainty about specific approaches [5].