Curated by THEOUTPOST
On Tue, 24 Sept, 8:07 AM UTC
12 Sources
[1]
OpenAI's Altman sees 'superintelligence' possible in a 'few thousand days' - but he's short on details
In just eight years from now, artificial intelligence (AI) may lead to something called "superintelligence," according to OpenAI CEO Sam Altman. "It is possible that we will have superintelligence in a few thousand days (!); it may take longer, but I'm confident we'll get there," wrote Altman in an essay, titled The Intelligence Age, on a website in his name. The post appears to be the only content on the website so far.

On Monday, Altman posted a link to the essay on X (formerly Twitter), where it had received 12,000 likes and 2,400 reposts by Tuesday afternoon.

Altman has used the term superintelligence in interviews, such as one with the Financial Times a year ago. He has tended to equate superintelligence with the broad quest, in academia and industry, to achieve "artificial general intelligence" (AGI): a computer that can reason as well as or better than a human.

In the 1,100-word essay, Altman makes a case for spreading AI to as many people as possible, as an advance in the "infrastructure of society" that will make possible a dramatic leap in human prosperity. "With these new abilities, we can have shared prosperity to a degree that seems unimaginable today," wrote Altman. "In the future, everyone's lives can be better than anyone's life is now. Prosperity alone doesn't necessarily make people happy -- there are plenty of miserable rich people -- but it would meaningfully improve the lives of people around the world."

Altman's essay is short on technical details, and its handful of sweeping claims about AI runs counter to many popular concerns about AI's ethical, social, and economic impact that have gathered steam in recent years.

The notion that scaling up computing will lead to a kind of superintelligence or AGI runs counter to what many scholars of AI have concluded -- for example, critic Gary Marcus, who argues that AGI, or anything like it, is nowhere near on the horizon, if it is achievable at all. Altman's notion that scaling AI is the main path to better AI is controversial. Prominent AI scholar and entrepreneur Yoav Shoham told ZDNET last month that scaling up computing will not be enough to boost AI; instead, Shoham advocated scientific exploration outside of deep learning.

Altman's optimistic view also makes no mention of the numerous issues of AI bias raised by scholars of the technology, nor of the rapidly expanding energy consumption of AI data centers, which many believe poses serious environmental risk. Environmentalist Bill McKibben, for example, has written that "there's no way we can build out renewable energy fast enough to meet this kind of extra demand" by AI, and that "in a rational world, faced with an emergency, we would put off scaling AI for now."

The timing of Altman's essay is noteworthy, as it comes on the heels of some prominent critiques of AI. These include Marcus's Taming Silicon Valley, published this month by MIT Press, and AI Snake Oil, by Princeton computer science scholars Arvind Narayanan and Sayash Kapoor, published this month by Princeton University Press.

In Taming Silicon Valley, Marcus warns of epic risks from generative AI systems unfettered by any societal control: "In the worst case, unreliable and unsafe AI could lead to mass catastrophes, ranging from chaos in electrical grids to accidental war or fleets of robots run amok. Many could lose jobs. Generative AI's business models ignore copyright law, democracy, consumer safety, and impact on climate change. And because it has spread so fast, with so little oversight, Generative AI has in effect become a vast, uncontrolled experiment on our whole population."

Marcus repeatedly calls out Altman for using hype to assert OpenAI's priorities, especially in promoting the imminent arrival of AGI. "One master stroke was to say that the OpenAI board would get together to determine when Artificial General Intelligence 'had been achieved'," writes Marcus of Altman's public remarks. "And few if any asked Altman why the important scientific question of when AGI was reached would be 'decided' by a board of directors rather than the scientific community."

In their book AI Snake Oil, a scathing denunciation of AI hype, Narayanan and Kapoor specifically call out Altman's public remarks about AI regulation, accusing him of engaging in a form of manipulation known as "regulatory capture" to avoid any actual constraints on his company's power: "Rather than meaningfully setting rules for the industry, the company [OpenAI] was looking to push the burden on competitors while avoiding any changes to its own structure. Tobacco companies tried something similar when they lobbied to stifle government action against cigarettes in the 1950s and '60s."

It remains to be seen whether Altman will broaden his public remarks via his website or whether the essay is a one-shot affair, perhaps meant to counter other skeptical narratives.
[2]
Sam Altman says AI superintelligence could be just 'a few thousand days' away
OpenAI CEO Sam Altman says artificial intelligence could become smarter than humans sooner than many people expect. In a blog post on his personal site, Altman discussed what he's calling the Intelligence Age and said, "It is possible that we will have superintelligence in a few thousand days (!); it may take longer, but I'm confident we'll get there." Of course, "a few thousand" days is pretty open-ended: 2,000 days is 5.5 years, while 5,000 days is just shy of 14, so while Altman is incredibly bullish on AI's future, he's not predicting overnight changes. Whereas artificial general intelligence aims to match the intellectual capabilities of humans, superintelligent AI looks to go even further, perhaps vastly outpacing the human brain's ability to assess problems and arrive at decisions. It's a technology that can stoke some of the biggest fears about AI, given its potential. Altman, in his post, said society currently stands at the start of what he calls "The Intelligence Age," which he believes can be among the most transformative in human history. "I believe the future is going to be so bright that no one can do it justice by trying to write about it now; a defining characteristic of the Intelligence Age will be massive prosperity," he wrote. "Although it will happen incrementally, astounding triumphs - fixing the climate, establishing a space colony, and the discovery of all of physics - will eventually become commonplace. With nearly-limitless intelligence and abundant energy - the ability to generate great ideas, and the ability to make them happen - we can do quite a lot." While he was bullish on the future of AI, Altman did note the risk of downsides, including the impact on the labor market, but hedged, saying "most jobs will change more slowly than most people think" and that many of the jobs we do today will look like wastes of time in the future. "Nobody is looking back at the past, wishing they were a lamplighter," he wrote.
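The day-to-year conversions quoted across these articles are easy to sanity-check. A minimal Python sketch, assuming the usual 365.25-day average year:

```python
# Convert Altman's "few thousand days" into years for comparison.
DAYS_PER_YEAR = 365.25  # average length of a year, accounting for leap years

def days_to_years(days: int) -> float:
    """Return the span in years, rounded to one decimal place."""
    return round(days / DAYS_PER_YEAR, 1)

for days in (1000, 2000, 3000, 4000, 5000):
    print(f"{days} days is about {days_to_years(days)} years")
```

This reproduces the figures cited here and in the other pieces: 2,000 days comes out to about 5.5 years, 3,000 to about 8.2, and 5,000 to roughly 13.7, just shy of 14.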
[3]
OpenAI CEO Sam Altman Says Superintelligence is Near - Here's What It Means
In a thought-provoking statement, Sam Altman, the visionary CEO of OpenAI, has predicted that superintelligence could become a reality within a mere few thousand days. This bold forecast underscores the rapid advancements in artificial intelligence (AI) and its potential to reshape our world in profound ways. Altman's vision of the future is rooted in the remarkable progress of deep learning, a subset of AI that holds the key to achieving Artificial General Intelligence (AGI). AGI represents a significant milestone in the evolution of AI, marking the point where machines can perform any intellectual task that a human can. This leap from narrow, task-specific AI to a more versatile and adaptable form of intelligence is what Altman believes will pave the way for superintelligence. The impact of AI extends far beyond the realm of technology; it promises to transform various sectors and change the way we live and work. In healthcare, AI has the potential to enhance diagnostic accuracy, optimize treatment plans, and ultimately improve patient outcomes. Education, too, stands to benefit from AI-driven tools that can provide personalized learning experiences tailored to individual student needs. Moreover, AI's problem-solving capabilities can be harnessed to tackle complex global challenges, from climate change to economic modeling. But the transformative power of AI goes beyond specific industries. It has the potential to enhance human capabilities across the board, making tasks more efficient and effective. By augmenting human intelligence, AI can unlock new levels of productivity and innovation, pushing the boundaries of what is possible. While the promise of AI is immense, it is not without its challenges. As AI continues to advance and integrate into the fabric of society, it is crucial to consider the economic and social implications.
One of the most pressing concerns is the potential for job displacement, as AI automates routine tasks and renders certain roles obsolete. However, it is important to recognize that AI will also create new job opportunities, necessitating a shift in workforce skills and adaptability. Ensuring the responsible development and deployment of AI is another critical challenge. As the power of AI grows, so does the need for robust risk management and ethical considerations. Balancing the benefits of AI with the potential pitfalls requires careful navigation and proactive measures to ensure that AI is harnessed for the greater good. Despite the challenges, Altman's prediction paints a picture of a future brimming with possibilities. He envisions a world where AI contributes to limitless intelligence and abundant energy, amplifying human creativity and driving unprecedented innovation. This optimistic outlook highlights the potential for AI to be a force for positive change, enhancing our capabilities and improving quality of life on a global scale. As we stand on the cusp of this transformative era, it is essential to draw upon the lessons of history. Just as past technological revolutions, such as the Industrial Revolution, reshaped society and the workforce, AI will undoubtedly lead to new job transformations and societal shifts. By understanding this historical context, we can better anticipate and manage the long-term impact of AI, ensuring a smooth transition into a future where humans and machines work in harmony. In conclusion, Sam Altman's prediction of achieving superintelligence within a few thousand days serves as a powerful reminder of the rapid pace of AI development and its potential to reshape our world. As we navigate this uncharted territory, it is crucial to approach AI with a mix of excitement and caution, harnessing its transformative power while addressing the challenges it presents. 
By doing so, we can unlock a future of limitless possibilities, where AI serves as a force for positive change and human advancement.
[4]
OpenAI CEO Sam Altman anticipates superintelligence soon, defends AI in rare personal blog post
OpenAI CEO Sam Altman penned a rare note on his website today spelling out more of his vision of the AI-powered future, or as he calls it (and his blog post is titled): "The Intelligence Age." Specifically, Altman argues that "deep learning works," and can generalize across a range of domains and difficult problem sets based on its training data, allowing people to "solve hard problems," including "fixing the climate, establishing a space colony, and the discovery of all physics." As he puts it: "That's really it; humanity discovered an algorithm that could really, truly learn any distribution of data (or really, the underlying "rules" that produce any distribution of data). To a shocking degree of precision, the more compute and data available, the better it gets at helping people solve hard problems. I find that no matter how much time I spend thinking about this, I can never really internalize how consequential it is." In a provocative statement that many AI industry participants and close observers have already seized upon in discussions on X, Altman also said that superintelligence -- AI that is "vastly smarter than humans," according to previous OpenAI statements -- may be achieved in "a few thousand days." "This may turn out to be the most consequential fact about all of history so far. It is possible that we will have superintelligence in a few thousand days (!); it may take longer, but I'm confident we'll get there." A thousand days is roughly 2.7 years -- much sooner than the five-year timelines most experts give. Many AI researchers, especially those from OpenAI, have been pursuing superintelligence; a less ambitious milestone is usually called artificial general intelligence (AGI). Former OpenAI chief scientist and co-founder Ilya Sutskever's new startup even focuses on safe superintelligence.
AI models have begun performing well on "IQ tests," or knowledge benchmarks, but they have not yet surpassed humans. So far, most use cases of generative AI have centered not on a program vastly smarter than the average human, but on assistants that complement human workers as they complete tasks.

AI experts for everyone

Altman, however, believes that this use case of AI assistants and agents will be widespread in a few years. "There are a lot of details we still have to figure out, but it's a mistake to get distracted by any particular challenge," Altman said. "Deep learning works, and we will solve the remaining problems. We can say a lot of things about what may happen next, but the main one is that AI is going to get better with scale, and that will lead to meaningful improvements to the lives of people around the world." He added that AI will soon allow everyone to accomplish many things, as each person will have a personal AI team with virtual experts in many areas, and kids will have personal tutors for any subject. It's no surprise Altman is an AI maximalist, as he runs one of the leading AI companies. OpenAI recently released its most powerful AI model yet, o1, which is capable of reasoning without much human instruction. Altman does point out that there are several roadblocks facing this world of widespread AI use, like the need to make compute cheaper and the availability of advanced chips. He even warns that without the infrastructure to support AI development, "AI will be a very limited resource that wars get fought over and that becomes mostly a tool for rich people."

Not entirely positive

Altman's not totally starry-eyed about AI's potential, though. He notes that there will be downsides, stating: "It will not be an entirely positive story, but the upside is so tremendous that we owe it to ourselves, and the future, to figure out how to navigate the risks in front of us."
Altman makes mention of people losing jobs to AI, something he's said before, a cursory nod to one of the biggest fears of those outside the tech world bubble. For Altman, labor under AI will change for both good and bad, but people will never run out of things to do. Altman's manifesto is not surprising to anyone who's followed the growth of OpenAI and generative AI over the past couple of years. The timing of his musings, however, did cause some to believe this might all be a way to set up OpenAI's next round of funding. The company is reportedly raising $6 billion to $6.5 billion, which would value it at $150 billion. However, it is interesting that Altman chose to post the message on his personal website rather than the official OpenAI company one, suggesting he views this more as his opinion than as an official company line.
[5]
AI superintelligence looms in Sam Altman's new essay on "The Intelligence Age"
Altman says "deep learning worked" and will lead to "massive prosperity." On Monday, OpenAI CEO Sam Altman outlined his vision for an AI-driven future of tech progress and global prosperity in a new personal blog post titled "The Intelligence Age." The essay paints a picture of human advancement accelerated by AI, with Altman suggesting that superintelligent AI could emerge within the next decade. "It is possible that we will have superintelligence in a few thousand days (!); it may take longer, but I'm confident we'll get there," he wrote. OpenAI's current goal is to create AGI (artificial general intelligence), a term for hypothetical technology that could match human intelligence in performing many tasks without the need for specific training. In contrast, superintelligence surpasses AGI: a hypothetical level of machine intelligence that can dramatically outperform humans at any intellectual task, perhaps even to an unfathomable degree. Superintelligence (sometimes called "ASI" for "artificial superintelligence") has been a popular but sometimes fringe topic in the machine learning community for years -- especially since controversial philosopher Nick Bostrom authored a book titled Superintelligence: Paths, Dangers, Strategies in 2014. Former OpenAI co-founder and Chief Scientist Ilya Sutskever left OpenAI in June to found a company with the term in its name: Safe Superintelligence. Meanwhile, Altman himself has been talking about developing superintelligence since at least last year. So, just how long is "a few thousand days"? There's no telling exactly. The likely reason Altman picked a vague number is that he doesn't know exactly when ASI will arrive, but it sounds like he thinks it could happen within a decade. For comparison, 2,000 days is about 5.5 years, 3,000 days is around 8.2 years, and 4,000 days is almost 11 years.
It's easy to criticize Altman's vagueness here; no one can truly predict the future. But Altman, as CEO of OpenAI, is likely privy to AI research techniques coming down the pipeline that aren't broadly known to the public. So even when couched in a broad time frame, the claim comes from a noteworthy source in the AI field -- albeit one who is heavily invested in making sure that AI progress does not stall. Not everyone shares Altman's optimism and enthusiasm. Computer scientist and frequent AI critic Grady Booch quoted Altman's "few thousand days" prediction and wrote on X, "I am so freaking tired of all the AI hype: it has no basis in reality and serves only to inflate valuations, inflame the public, garnet [sic] headlines, and distract from the real work going on in computing." Despite the criticism, it's notable when the CEO of what is probably the defining AI company of the moment makes a broad prediction about future capabilities -- even if that means he's perpetually trying to raise money. Building infrastructure to power AI services is foremost on many tech CEOs' minds these days. "If we want to put AI into the hands of as many people as possible," Altman writes in his essay, "we need to drive down the cost of compute and make it abundant (which requires lots of energy and chips). If we don't build enough infrastructure, AI will be a very limited resource that wars get fought over and that becomes mostly a tool for rich people."

Altman's vision for "The Intelligence Age"

Elsewhere in the essay, Altman frames our present era as the dawn of "The Intelligence Age," the next transformative technology era in human history, following the Stone Age, Agricultural Age, and Industrial Age. He credits the success of deep learning algorithms as the catalyst for this new era, stating simply: "How did we get to the doorstep of the next leap in prosperity? In three words: deep learning worked."
The OpenAI chief envisions AI assistants becoming increasingly capable, eventually forming "personal AI teams" that can help individuals accomplish almost anything they can imagine. He predicts AI will enable breakthroughs in education, health care, software development, and other fields. While acknowledging potential downsides and labor market disruptions, Altman remains optimistic about AI's overall impact on society. He writes, "Prosperity alone doesn't necessarily make people happy -- there are plenty of miserable rich people -- but it would meaningfully improve the lives of people around the world." Even with AI regulation like SB-1047 being the hot topic of the day, Altman didn't mention sci-fi dangers from AI in particular. On X, Bloomberg columnist Matthew Yglesias wrote, "Notable that @sama is no longer even paying lip service to existential risk concerns, the only downsides he's contemplating are labor market adjustment issues." While enthusiastic about AI's potential, Altman urges caution, too, but vaguely. He writes, "We need to act wisely but with conviction. The dawn of the Intelligence Age is a momentous development with very complex and extremely high-stakes challenges. It will not be an entirely positive story, but the upside is so tremendous that we owe it to ourselves, and the future, to figure out how to navigate the risks in front of us." Aside from labor market disruptions, Altman does not specify how the Intelligence Age will be less than entirely positive, but he closes with an analogy about an occupation lost to technological change. "Many of the jobs we do today would have looked like trifling wastes of time to people a few hundred years ago, but nobody is looking back at the past, wishing they were a lamplighter," he wrote. "If a lamplighter could see the world today, he would think the prosperity all around him was unimaginable.
And if we could fast-forward a hundred years from today, the prosperity all around us would feel just as unimaginable."
[6]
Sam Altman: The 'Intelligence Age' May Be Only a Few Thousand Days Away
Samantha Kelly is a freelance writer with a focus on consumer technology, AI, social media, Big Tech, emerging trends and how they impact our everyday lives. Her work has been featured on CNN, NBC, NPR, the BBC, Mashable and more. OpenAI CEO Sam Altman believes superintelligent AI is only a "few thousand days" away and will forever change the way we live. Altman wrote a blog post outlining his very optimistic vision for superintelligent AI to solve intensive problems and push human progress forward more than ever before. (Superintelligent AI refers to technology that surpasses human capabilities.) The post came one day after a New York Times report revealed the executive is teaming up with famed former Apple designer Jony Ive on a top-secret AI device project. Altman wrote that changes from AI won't happen all at once but will help us "accomplish much more than we ever could without AI." He described children having virtual tutors who will be able to provide personalized instruction on various topics, in different languages and at any pace, and autonomous personal assistants that could execute tasks such as coordinating medical care on your behalf. His vision continued: "Although it will happen incrementally, astounding triumphs - fixing the climate, establishing a space colony, and the discovery of all of physics - will eventually become commonplace. With nearly-limitless intelligence and abundant energy - the ability to generate great ideas, and the ability to make them happen - we can do quite a lot." But in order for the "Intelligence Age" to take off, he said, the cost of computing will need to go down and the infrastructure will need to be robust enough to supply all of the required energy and chips. "If we don't build enough infrastructure, AI will be a very limited resource that wars get fought over and that becomes mostly a tool for rich people," he wrote. Altman once again acknowledged how the industry needs to "maximize AI's benefits while minimizing its harms."
He's been vocal about the "catastrophic risks" associated with AI, alongside other tech leaders including Bill Gates and Elon Musk. But not everyone feels AI will upend life as we know it. Some experts, including Noam Chomsky and Rodney Brooks, who ran MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) for 10 years, believe the impact of generative AI is overhyped because it will never be able to think better than humans.
[7]
Sam Altman Says, 'The Arrival of Superintelligence Is Just a Few Thousand Days Away'
Sam Altman, CEO of OpenAI, believes that the arrival of superintelligence is just a few thousand days away. In a recent blog post titled 'The Intelligence Age', Altman said, "It is possible that we will have superintelligence in a few thousand days (!); it may take longer, but I'm confident we'll get there." Altman credited 'deep learning' as the driving force behind AI's rapid progress, saying that humanity has discovered an algorithm capable of learning from massive datasets with increasing precision. He said, "In three words: deep learning worked." "In 15 words: deep learning worked, got predictably better with scale, and we dedicated increasing resources to it," he quipped. He believes AI can solve complex problems, including climate change, space colonization, and fundamental physics. Altman imagines a future where AI improves various aspects of life, including personal AI teams that assist with everyday tasks, personalised education through AI tutors for children, and substantial advancements in healthcare and problem-solving abilities. "Technology brought us from the Stone Age to the Agricultural Age and then to the Industrial Age. From here, the path to the Intelligence Age is paved with compute, energy, and human will," he said. However, Altman said the industry needs to bring down the cost of compute and make it abundant, which requires a lot of energy and chips. "If we don't build enough infrastructure, AI will be a very limited resource that wars get fought over and that becomes mostly a tool for rich people." Moreover, he believes that many jobs currently performed by people will, in the future, seem like a waste of time. In a recent interview, Altman described the company's latest AI model, o1, as being at the 'GPT-2 stage' of reasoning development: "I think of this as like we're at the GPT-2 stage of these new kinds of reasoning models."
He emphasised that while the model is still early in its development, significant improvements are expected in the coming months. Altman said that users will notice o1 rapidly improving as OpenAI moves from the o1-preview model to the full release. "Even in the coming months, you'll see it get a lot better as we move from o1-preview to o1, which we shared some metrics for in our launch blog post," he said. "You will see it reach the GPT-4 equivalent over the coming years."
[8]
Sam Altman says superintelligence is on the way, and the future is bright - SiliconANGLE
OpenAI's CEO Sam Altman published a blog post today stating that the world will have superintelligence in "a few thousand days" and that what he sees ahead is not the techno-dystopia some critics see but "massive prosperity." Altman's post was only a few hundred words long, but he managed to pack it with enough assertions about the future to fill a few hundred counter-argument books. He admitted that the arrival of superintelligence might take a tad longer than he expects, but it's coming, he says, and we'll soon "be able to do things that would have seemed like magic to our grandparents." His contentions are a far cry from those of people concerned about our machine-led future and the prospect of an ever-expanding wealth gap. Altman, who's heavily invested in this revolution, sees machines not as great replacers but as fantastic augmenters, as do many people who are not even betting on AI becoming superintelligent any time soon. "We are more capable not because of genetic change, but because we benefit from the infrastructure of society being way smarter and more capable than any one of us; in an important sense, society itself is a form of advanced intelligence," he wrote, adding that AI will solve the "hard problems" and create a foundation from which we can pole-vault into a better future. This future, he says, will enhance every aspect of our lives. He sees humans -- we can assume he includes the hoi polloi -- each having their own "personal AI team" to get through the day, teams of "virtual experts" threaded into the weft of our daily existence, relieving our lives of the knots and complications that presently make them difficult. "Working together to create almost anything we can imagine," he contends. His future is a world where virtual tutors take the stress out of studying because they understand each child's unique needs.
Presumably, everyone will get the education they require and will have the skills to succeed in a world where jobs are plentiful, and this will give us a "shared prosperity to a degree that seems unimaginable today." It's a nice vision; it's a wonderful vision. But a more skeptical person might wonder whether utopia is really so close to being within our grasp, or whether utopias in general are merely figments of the human imagination, given that we seem to be built for conflict and self-preservation. Will AI re-program hardwired human evolutionary characteristics? Will a hyper-connection with technology have any downsides, as we seem to be figuring out with social media? Maybe his vision is just not as clouded as the vision of us mere mortals. He does accept there will be problems on the road ahead, such as disruption to the labor market, but he says this will be ironed out, and, sounding like Karl Marx, he contends humans will soon be free of mind-numbing toil to enjoy the better things in life. If this happens in a few thousand days, you should probably start planning to put your feet up. "The dawn of the Intelligence Age is a momentous development with very complex and extremely high-stakes challenges," Altman concluded. "It will not be an entirely positive story, but the upside is so tremendous that we owe it to ourselves, and the future, to figure out how to navigate the risks in front of us."
[9]
OpenAI CEO Sam Altman declares we could have superintelligence 'in a few thousand days'
OpenAI CEO Sam Altman has declared that humanity is on the brink of a superintelligence revolution, and that "In the next couple of decades, we will be able to do things that would have seemed like magic to our grandparents." In a blog post titled The Intelligence Age, the AI pioneer, whose company created ChatGPT, is bullish on the power of artificial intelligence to help us accomplish things that were unimaginable just a few decades ago. He wrote that "we'll soon be able to work with AI that helps us accomplish much more than we ever could without AI; eventually we can each have a personal AI team, full of virtual experts in different areas, working together to create almost anything we can imagine. Our children will have virtual tutors who can provide personalized instruction in any subject, in any language, and at whatever pace they need. We can imagine similar ideas for better healthcare, the ability to create any kind of software someone can imagine, and much more." Altman's post reaffirms his optimism about an exciting future for artificial intelligence, in particular his comments on the future of superintelligence and when we might have access to it. "It is possible that we will have superintelligence in a few thousand days (!); it may take longer, but I'm confident we'll get there," he wrote. Superintelligence, a step beyond artificial general intelligence (AGI), would see AI surpass the brightest human minds, potentially enhancing humanity and becoming a tool to take development to the next level. As yet, no AI supersedes human intelligence, but to Altman it seems to be a matter of when, rather than if. Altman's vision that AI will help humanity achieve whatever we can imagine is not the only attention-grabbing take from this blog post.
He's hesitant about some elements of AI, highlighting that "It will not be an entirely positive story, but the upside is so tremendous that we owe it to ourselves, and the future, to figure out how to navigate the risks in front of us." He acknowledges that there are likely to be job losses along the way, writing that "we expect that this technology can cause a significant change in labor markets (good and bad) in the coming years, but most jobs will change more slowly than most people think, and I have no fear that we'll run out of things to do (even if they don't look like 'real jobs' to us today)." Altman's vision of the future will be seen as exciting to some and terrifying to others, but one thing is clear: AI and superintelligence are set to transform the way we live forever, and we may be too far down the road to change that, even if we wanted to.
[10]
AI will be more intelligent than humans 'in a few thousand days' says OpenAI CEO
We've all seen the movies, haven't we? AI gets smarter, and soon it's humanity serving it instead of the other way around. From The Matrix to Terminator, pop culture is full of warnings. With AI getting smart enough to break rules to prove its prowess, and experts needing to devise the toughest questions to try to stump it, you'd be right to keep those fictional stories in mind. Still, OpenAI CEO Sam Altman has suggested that superintelligent AI could be just a few short years away. In a blog post titled "The Intelligence Age", Altman says "It is possible that we will have superintelligence in a few thousand days (!); it may take longer, but I'm confident we'll get there". Altman points to the power of deep learning, and the way it has scaled, as evidence that the next jump may not be as large as it seemed not too long ago. "That's really it; humanity discovered an algorithm that could really, truly learn any distribution of data (or really, the underlying "rules" that produce any distribution of data). To a shocking degree of precision, the more compute and data available, the better it gets at helping people solve hard problems," he writes. "I find that no matter how much time I spend thinking about this, I can never really internalize how consequential it is." "There are a lot of details we still have to figure out, but it's a mistake to get distracted by any particular challenge. Deep learning works, and we will solve the remaining problems." "We can say a lot of things about what may happen next, but the main one is that AI is going to get better with scale, and that will lead to meaningful improvements to the lives of people around the world." Altman goes on to say that AI could help humanity achieve "astounding triumphs" like "fixing the climate, establishing a space colony, and the discovery of all of physics".
While it seems the logical next step beyond AI as we know it, artificial superintelligence (ASI) is somewhat controversial because it is the idea that an AI could become smarter than a human while also being self-sufficient. OpenAI is still working to achieve artificial general intelligence (AGI), which would match and surpass human intelligence, but it seems ASI is on Altman's mind already. This makes sense given that Ilya Sutskever, OpenAI co-founder and former chief scientist, left the company to focus on building superintelligence with his new startup SSI Inc., which has already raised $1 billion in funding.
[11]
OpenAI CEO Sam Altman Says Superintelligence is a Few Thousand Days Away
Altman further says that scaling existing AI systems will lead to generalized intelligence. OpenAI recently released its new o1 models, delivering breakthrough performance in complex reasoning and inference scaling. Internally, the ChatGPT maker believes it has achieved Goal 3: building an AI agent that can perform complex tasks. Now, in a blog post titled 'The Intelligence Age', Sam Altman, the OpenAI CEO, makes an astounding claim: superintelligence is a few thousand days away. He writes, "It is possible that we will have superintelligence in a few thousand days (!); it may take longer, but I'm confident we'll get there." A few thousand days might mean five to ten years, but we can't say for sure. However, in 2023, OpenAI shared its vision for how to govern superintelligent AI systems and said, "it's conceivable that within the next ten years, AI systems will exceed expert skill level in most domains, and carry out as much productive activity as one of today's largest corporations." Here, OpenAI indicates that by 2033 we may have far more powerful AI agents that can carry out the tasks of an entire corporation. In the blog post, Altman further notes that deep learning works and will get better with scale and more resources. OpenAI has been bullish on scaling current AI systems and predicts that scaling large AI systems will inevitably lead to intelligence. Ilya Sutskever, co-founder and former chief scientist of OpenAI, also believes that scaling, both in terms of the size of the neural network and computing resources, may lead to greater intelligence. Sutskever has since left OpenAI to build his own company, Safe Superintelligence Inc. (SSI). That said, other AI researchers, most notably François Chollet, who works at Google, argue that scaling existing technologies like LLMs may not be sufficient to achieve Artificial General Intelligence (AGI).
Chollet has even designed an evaluation known as ARC-AGI to assess the generalized intelligence of AI models; notably, OpenAI's o1 models performed poorly on the ARC-AGI benchmark. Yann LeCun, chief AI scientist at Meta, also argues that LLMs can't plan and are essentially autocomplete systems rather than world models. Many AI researchers on this side of the debate say that new breakthroughs are needed to unlock superintelligence, and that LLMs may not be enough.
[12]
Sam Altman says superintelligence is coming: 'Going to get so good, so soon'
AI will enhance this structure further by solving problems we couldn't manage on our own, he said. "This phenomenon is not new, but it will be newly accelerated. People have become dramatically more capable over time; we can already accomplish things now that our predecessors would have believed to be impossible. We are more capable not because of genetic change, but because we benefit from the infrastructure of society being way smarter and more capable than any one of us," he wrote. On AI's recent progress, he said, "In three words: deep learning worked. To a shocking degree of precision, the more compute and data available, the better it gets at helping people solve hard problems." He added, "AI models will soon serve as autonomous personal assistants who carry out specific tasks on our behalf like coordinating medical care on your behalf. At some point further down the road, AI systems are going to get so good that they help us make better next-generation systems and make scientific progress across the board."
OpenAI CEO Sam Altman's recent blog post suggests superintelligent AI could emerge within 'a few thousand days,' stirring discussions about AI's rapid advancement and potential impacts on society.
OpenAI CEO Sam Altman has made waves in the tech world with his recent blog post, suggesting that superintelligent AI could become a reality in as little as "a few thousand days" 1. This prediction, equating to roughly five to eight years, has sparked intense debate among AI researchers, ethicists, and industry leaders about the rapid pace of AI development and its potential consequences.
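As a back-of-the-envelope check on that "five to eight years" reading, the conversion can be sketched as follows (a minimal illustration; the 2,000–3,000-day range is an assumption, since Altman gives no exact figure beyond "a few thousand days"):

```python
# Rough conversion of "a few thousand days" into years.
# The 2,000-3,000 day range is an assumption, not Altman's own figure.
DAYS_PER_YEAR = 365.25  # average calendar year, accounting for leap years

for days in (2000, 3000):
    years = days / DAYS_PER_YEAR
    print(f"{days} days is roughly {years:.1f} years")
```

Under that assumption, the phrase spans roughly 5.5 to 8.2 years, which is where the "five to eight years" estimate comes from.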
Altman describes superintelligence as AI systems that surpass human intelligence across all domains 2. These hypothetical systems would be capable of outperforming humans in every cognitive task, potentially revolutionizing fields such as scientific research, economic planning, and technological innovation.
In his blog post, Altman outlines the trajectory he envisions for AI development. He argues that the progression from current AI models to superintelligent systems will be gradual but accelerating 3. This perspective challenges the notion of an abrupt "intelligence explosion" and suggests a more nuanced evolution of AI capabilities.
While Altman expresses optimism about the potential benefits of superintelligent AI, he also acknowledges the significant challenges and risks it poses. These include economic disruption, potential misuse of advanced AI systems, and the need for robust governance frameworks 4.
Altman's predictions have reignited discussions about the timeline for achieving superintelligence. Some experts argue that his estimates are overly optimistic, while others believe that the rapid progress in AI research makes such a timeline plausible 5.
In response to these potential developments, Altman calls for proactive measures to prepare for what he terms the "Intelligence Age." This includes investing in education, updating regulatory frameworks, and fostering public dialogue about the ethical implications of advanced AI systems 1.
As the CEO of OpenAI, one of the leading AI research organizations, Altman's statements carry significant weight in the industry. He emphasizes OpenAI's commitment to developing AI systems that benefit humanity while mitigating potential risks 4.
© 2024 TheOutpost.AI All rights reserved