Curated by THEOUTPOST
On Mon, 6 Jan, 8:03 AM UTC
20 Sources
[1]
'Superintelligence' will take time to generate super returns
Any company such as OpenAI, heading for a loss of $5bn last year on $3.7bn of revenue, needs a good story to tell to keep the funding flowing. And stories don't come much more compelling than saying your company is on the cusp of transforming the world and creating a "glorious future" by developing artificial general intelligence. Definitions vary about what AGI means, given that it represents a theoretical rather than a technological threshold. But most AI researchers would say it is the point at which machine intelligence surpasses human intelligence across most cognitive fields. Attaining AGI is the industry's holy grail and the explicit mission of companies such as OpenAI and Google DeepMind, even though some holdouts still doubt it will ever be achieved. Most predictions of when we might reach AGI have been drawing nearer due to the striking progress in the industry. Even so, Sam Altman, OpenAI's chief executive, startled many on Monday when he posted on his blog: "We are now confident we know how to build AGI as we have traditionally understood it." The company, which triggered the latest investment frenzy in AI after launching its ChatGPT chatbot in November 2022, was valued at $150bn in October. ChatGPT now has more than 300mn weekly users. There are several reasons to be sceptical about Altman's claim that AGI is essentially a solved problem. OpenAI's most persistent critic, the AI researcher Gary Marcus, was quick off the mark. "We are now confident that we can spin bullshit at unprecedented levels, and get away with it," Marcus tweeted, parodying Altman's statement. In a separate post, Marcus repeated his assertion that "there is zero justification for claiming that the current technology has achieved general intelligence", citing its lack of reasoning power, understanding and reliability. But OpenAI's extraordinary valuation seemingly assumes that Altman may be right. In his post, he suggested that AGI should be seen more as a process towards achieving superintelligence than an end point. Still, if the threshold ever were crossed, AGI would probably count as the biggest event of the century. Even the sun god of news that is Donald Trump would be eclipsed. Investors reckon that a world in which machines become smarter than humans in most fields would generate phenomenal wealth for their creators. Used wisely, AGI could accelerate scientific discovery and help us become vastly more productive. But super-powerful AI also carries concerns: excessive concentration of corporate power and possibly existential risk. Diverting though these debates may be, they remain theoretical, and from an investment perspective unknowable. But OpenAI suggests that enormous value can still be derived from applying increasingly powerful but narrow AI systems to a widening number of real-world uses. The industry phrase of the year is agentic AI, using digital assistants to achieve specific tasks. Speaking at the CES event in Las Vegas this week, Jensen Huang, chief executive of chip designer Nvidia, defined agentic AI as systems that can "perceive, reason, plan and act". Agentic AI is certainly one of the hottest draws for venture capital. CB Insights' State of Venture 2024 report calculated that AI start-ups attracted 37 per cent of the global total of $275bn of VC funding last year, up from 21 per cent in 2023. The fastest-growing areas for investment were AI agents and customer support. 
"We believe that, in 2025, we may see the first AI agents 'join the workforce' and materially change the output of companies", Altman wrote. Take travel, for example. Once prompted by text or voice, AI agents can book entire business trips: securing the best flights, finding the most convenient hotel, scheduling diary appointments and arranging taxi pick-ups. That methodology applies to a vast array of business functions and it's a fair bet that an AI start-up somewhere is working out how to automate them. Relying on autonomous AI agents to perform such tasks requires a user to trust the technology. The problem with hallucinations is now well known. One other concern is prompt injection, where a malicious counterparty tricks an AI agent into disclosing confidential information. To build a secure multi-agent economy at scale will require the development of trustworthy infrastructure, which may take some time. The returns from AI will also have to be spectacular to justify the colossal investments being made by the big tech companies and VC firms. How long will impatient investors hold their nerve?
[2]
How OpenAI's Sam Altman Is Thinking About AGI and Superintelligence in 2025
OpenAI CEO Sam Altman recently published a post on his personal blog reflecting on AI progress and his predictions for how the technology will impact humanity's future. "We are now confident we know how to build AGI [artificial general intelligence] as we have traditionally understood it," Altman wrote. He added that OpenAI, the company behind ChatGPT, is beginning to turn its attention to superintelligence. While there is no universally accepted definition for AGI, OpenAI has historically defined it as "a highly autonomous system that outperforms humans at most economically valuable work." Although AI systems already outperform humans in narrow domains, such as chess, the key to AGI is generality. Such a system would be able to, for example, manage a complex coding project from start to finish, draw on insights from biology to solve engineering problems, or write a Pulitzer-worthy novel. OpenAI says its mission is to "ensure that AGI benefits all of humanity." Altman indicated in his post that advances in the technology could lead to more noticeable adoption of AI in the workplace in the coming year, in the form of AI agents -- autonomous systems that can perform specific tasks without human intervention, potentially taking actions for days at a time. "In 2025, we may see the first AI agents 'join the workforce' and materially change the output of companies," he wrote. In a recent interview with Bloomberg, Altman said he thinks "AGI will probably get developed during [Trump's] term," while noting his belief that AGI "has become a very sloppy term." Competitors also think AGI is close: Elon Musk, a co-founder of OpenAI, who runs AI startup xAI, and Dario Amodei, CEO of Anthropic, have both said they think AI systems could outsmart humans by 2026. In the largest survey of AI researchers to date, which included over 2,700 participants, researchers collectively estimated there is a 10% chance that AI systems can outperform humans on most tasks by 2027, assuming science continues progressing without interruption. Others are more skeptical. Gary Marcus, a prominent AI commentator, disagrees with Altman that AGI is "basically a solved problem," while Mustafa Suleyman, CEO of Microsoft AI, has said, regarding whether AGI can be achieved on today's hardware, "the uncertainty around this is so high, that any categorical declarations just feel sort of ungrounded to me and over the top," citing challenges in robotics as one cause for his skepticism. Microsoft and OpenAI, which have had a partnership since 2019, also have a financial definition of AGI. Microsoft is OpenAI's exclusive cloud provider and largest backer, having invested over $13 billion in the company to date. The companies have an agreement that Microsoft will lose access to OpenAI's models once AGI is achieved. Under this agreement, which has not been publicly disclosed, AGI is reportedly defined as being achieved when an AI system is capable of generating the maximum total profits to which its earliest investors are entitled: a figure that currently sits at $100 billion. Ultimately, however, the declaration of "sufficient AGI" remains at the "reasonable discretion" of OpenAI's board, according to a report in The Information. At present, OpenAI is a long way from profitability. The company currently loses billions annually and it has reportedly projected that its annual losses could triple to $14 billion by 2026. It does not expect to turn its first profit until 2029, when it expects its annual revenue could reach $100 billion.
Even the company's latest plan, ChatGPT Pro, which costs $200 per month and gives users access to the company's most advanced models, is losing money, Altman wrote in a post on X. Although Altman didn't explicitly say why the company is losing money, running AI models is very cost intensive, requiring investments in data centers and electricity to provide the necessary computing power. OpenAI has said that AGI "could help us elevate humanity by increasing abundance, turbocharging the global economy, and aiding in the discovery of new scientific knowledge that changes the limits of possibility." But recent comments from Altman have been somewhat more subdued. "My guess is we will hit AGI sooner than most people in the world think and it will matter much less," he said in December. "AGI can get built, the world mostly goes on in mostly the same way, things grow faster, but then there is a long continuation from what we call AGI to what we call superintelligence." In his most recent post, Altman wrote, "We are beginning to turn our aim beyond [AGI], to superintelligence in the true sense of the word. We love our current products, but we are here for the glorious future." He added that "superintelligent tools could massively accelerate scientific discovery and innovation well beyond what we are capable of doing on our own, and in turn massively increase abundance and prosperity." This ability to accelerate scientific discovery is a key distinguishing factor between AGI and superintelligence, at least for Altman, who has previously written that "it is possible that we will have superintelligence in a few thousand days." The concept of superintelligence was popularized by philosopher Nick Bostrom, who in 2014 wrote a best-selling book -- Superintelligence: Paths, Dangers, Strategies -- that Altman has called "the best thing [he's] seen on the topic." Bostrom defines superintelligence as "any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest" -- like AGI, but more. "The first AGI will be just a point along a continuum of intelligence," OpenAI said in a 2023 blog post. "A misaligned superintelligent AGI could cause grievous harm to the world; an autocratic regime with a decisive superintelligence lead could do that too." These harms are inextricable from the idea of superintelligence, because experts do not currently know how to align these hypothetical systems with human values. Both AGI and superintelligent systems could cause harm, not necessarily due to malicious intent, but simply because humans are unable to adequately specify what they want the system to do. As professor Stuart Russell told TIME in 2024, the concern is that "what seem to be reasonable goals, such as fixing climate change, lead to catastrophic consequences, such as eliminating the human race as a way to fix climate change." In his 2015 essay, Altman wrote that "development of superhuman machine intelligence is probably the greatest threat to the continued existence of humanity." OpenAI has previously written that it doesn't know "how to reliably steer and control superhuman AI systems." The team created to lead work on steering superintelligent systems for the safety of humans was disbanded last year, after both its co-leads left the company. At the time, one of the co-leads, Jan Leike, wrote on X that "over the past years, safety culture and processes have taken a backseat to shiny products."
At present, the company has three safety bodies: an internal safety advisory group, a safety and security committee, which is part of the board, and the deployment safety board, which has members from both OpenAI and Microsoft, and approves the deployment of models above a certain capability level. Altman has said they are working to streamline their safety processes. When asked on X whether he thinks the public should be asked if they want superintelligence, Altman replied: "yes i really do; i hope we can start a lot more public debate very soon about how to approach this." OpenAI has previously emphasized that the company's mission is to build AGI, not superintelligence, but Altman's recent post suggests that stance might have shifted. Discussing the risks from AI in the recent Bloomberg interview, Altman said he still expects "that on cybersecurity and bio stuff, we'll see serious, or potentially serious, short-term issues that need mitigation," and that long-term risks are harder to imagine precisely. "I can simultaneously think that these risks are real and also believe that the only way to appropriately address them is to ship product and learn," he said. Reflecting on recent years, Altman wrote that they "have been the most rewarding, fun, best, interesting, exhausting, stressful, and -- particularly for the last two -- unpleasant years of my life so far." Delving further into his brief ouster in November 2023 as CEO by the OpenAI board, and subsequent return to the company, Altman called the event "a big failure of governance by well-meaning people, myself included," noting he wished he had done things differently. In his recent interview with Bloomberg he expanded on that, saying he regrets initially saying he would only return to the company if the whole board quit. He also said there was "real deception" on behalf of the board, who accused him of not being "consistently candid" in his dealings with them. Helen Toner and Tasha McCauley, members of the board at the time, later wrote that senior leaders in the company had approached them with concerns that Altman had cultivated a "toxic culture of lying," and engaged in behavior that could be called "psychological abuse." Current board members Bret Taylor and Larry Summers have rejected the claims made by Toner and McCauley, and pointed to an investigation of the dismissal by law firm WilmerHale on behalf of the company. They wrote in an op-ed that they "found Mr Altman highly forthcoming on all relevant issues and consistently collegial with his management team." The review attributed Altman's removal to "a breakdown in the relationship and loss of trust between the prior Board and Mr. Altman," rather than concerns regarding product safety or the pace of development. Commenting on the period following his return as CEO, Altman told Bloomberg, "It was like another government investigation, another old board member leaking fake news to the press. And all those people that I feel like really f---ed me and f---ed the company were gone, and now I had to clean up their mess." He did not specify what he meant by "fake news." Writing about what the experience taught him, Altman said he had "learned the importance of a board with diverse viewpoints and broad experience in managing a complex set of challenges. Good governance requires a lot of trust and credibility."
In December, OpenAI announced plans to restructure as a public benefit corporation, which would remove the company from control by the nonprofit that tried to fire Altman. The nonprofit would receive shares in the new company, though the value is still being negotiated. Acknowledging that some might consider discussion of superintelligence as "crazy," Altman wrote, "We're pretty confident that in the next few years, everyone will see what we see, and that the need to act with great care, while still maximizing broad benefit and empowerment, is so important," adding: "Given the possibilities of our work, OpenAI cannot be a normal company."
[3]
OpenAI begins 2025 with massive hype for AGI, superintelligence
Much like how 2024 ended in New York City, the 2025 AI news cycle has started off with a thunderclap. OpenAI CEO Sam Altman took to his personal blog yesterday, January 5, 2025, to belatedly commemorate the second anniversary of ChatGPT (launched in November 2022) and offer a series of "Reflections," as the post was titled, on progress toward OpenAI's stated goal of developing artificial general intelligence (AGI) -- the company defines this as "AI systems that are generally smarter than humans" -- and later, superintelligence, or AI systems even smarter than that. Among the eye-popping statements Altman makes in his post is this: "We are now confident we know how to build AGI as we have traditionally understood it." Altman didn't put a timeline on this particular development in his blog post, but in an interview with Bloomberg -- conducted ahead of the announcement of OpenAI's o3 model last month and published just yesterday -- he said: "I think AGI will probably get developed during this president's term, and getting that right seems really important." Before we have AGI, AI agents will join the workforce this year, Altman says. Back to his blog, he wrote: "We believe that, in 2025, we may see the first AI agents 'join the workforce' and materially change the output of companies." If I may read between the lines here, the idea is that companies could soon augment or even replace human members of their staff with AI agents, that is, autonomous or semi-autonomous AI-powered assistants that can complete multiple tasks with minimal human back-and-forth. Superintelligence incoming? It is Altman's concluding statements in his blog post that are perhaps the boldest and most provocative. He goes on to write: "We are beginning to turn our aim beyond that, to superintelligence in the true sense of the word. We love our current products, but we are here for the glorious future. With superintelligence, we can do anything else. Superintelligent tools could massively accelerate scientific discovery and innovation well beyond what we are capable of doing on our own, and in turn massively increase abundance and prosperity. This sounds like science fiction right now, and somewhat crazy to even talk about it. That's alright -- we've been there before and we're OK with being there again. We're pretty confident that in the next few years, everyone will see what we see, and that the need to act with great care, while still maximizing broad benefit and empowerment, is so important. Given the possibilities of our work, OpenAI cannot be a normal company." That very same day, OpenAI's Head of Mission Alignment Joshua Achiam posted on X as well: "The world isn't grappling enough with the seriousness of AI and how it will upend or negate a lot of the assumptions many seemingly-robust equilibria are based upon." Achiam expounded further in a lengthy thread on X, suggesting the rapid pace of AI advancement will significantly alter "Domestic politics. International politics. Market efficiency. The rate of change of technological progress. Social graphs. The emotional dependency of people on other people," and in another post, added that this "will force changes in strategy in businesses, institutions of all kinds, and countries." Prior to all these posts, on January 3rd, Stephen McAleer, who describes himself as working on agent safety research at OpenAI, also posted on X: "I kinda miss doing AI research back when we didn't know how to create superintelligence." 
Reactions across the board: the reaction around the web to these posts has been a fairly predictable mix of positive and negative, and appears to me to be mostly evenly split between those who embrace OpenAI's optimistic and seemingly aggressive timeline for the advance of AI in society and those who believe the company is full of it. As McKay Wrigley, founder of Takeoff AI, a skills development platform, wrote in a post on X: "AGI timelines are out. ASI timelines are in." Another X user, @gfodor, wrote an extremely optimistic prediction in a post: "By the end of Trump's term we'll have AGI if not ASI, we will be on Mars, we will have at least a million humanoid robots, we'll know we are alone if aliens don't show up, we'll know if Yud was right, and we will have to have UBI. Fun" UBI, of course, refers to "universal basic income," an idea floated as far back as the late 1700s of providing a guaranteed minimum income to the entire population, which has in recent years taken on new backing from Silicon Valley figures, including Altman, as a means of redistributing AI's productivity gains and ensuring society doesn't undergo economic depression or devastation if most jobs are replaced by AI. Perennial OpenAI skeptic Gary Marcus took to X to post a thread of links to areas where he believes OpenAI's o1 reasoning model is falling well short of what could be considered AGI or close to it, and stated: "Many leading figures in the field have acknowledged that we may have reached a period of diminishing returns of pure LLM scaling, much as I anticipated in 2022. It's anybody's guess what happens next." Benjamin Riley, a former JP Morgan associate who said he worked at the firm when infamous failed energy company Enron was a client, compared Enron to OpenAI on the social network Bluesky in a series of posts, writing, in part: "I mostly steer clear of OpenAI palace intrigue but man, all the signs are there." Seizing upon Altman's prediction of AI agents joining the workforce this year, public relations manager and outspoken AI critic Ed Zitron also wrote on Bluesky: "Stop fucking printing everything Sam Altman says like it's truth!" We'll soon find out whether it is indeed the truth, as 2025 has scarcely begun -- and already, the AGI and superintelligence hype has hit a fever pitch unlike any I've seen in my 15 years writing about technology.
[4]
What Does OpenAI's Sam Altman Mean When He Says AGI is Achievable? - Decrypt
Sam Altman started 2025 with a bold declaration: OpenAI has figured out how to create artificial general intelligence (AGI), a term commonly understood as the point where AI systems can comprehend, learn, and perform any intellectual task that a human can. In a reflective blog post published over the weekend, he also said the first wave of AI agents could join the workforce this year, marking what he describes as a pivotal moment in technological history. Altman painted a picture of OpenAI's journey from a quiet research lab to a company that claims to be on the brink of creating AGI. The timeline seems ambitious -- perhaps too ambitious: ChatGPT celebrated its second birthday just over a month ago, yet Altman suggests the next paradigm of AI models capable of complex reasoning is already here. From there, it's all about integrating near-human AI into society until AI beats us at everything. Altman's elaboration on what AGI implies remained vague, and his timeline predictions raised eyebrows among AI researchers and industry veterans. "We are now confident we know how to build AGI as we have traditionally understood it," Altman wrote. "We believe that, in 2025, we may see the first AI agents 'join the workforce' and materially change the output of companies." Altman's explanation is vague because there is no standardized definition of AGI. The bar has been raised each time AI models have become more powerful, though not necessarily more generally capable. "When considering what Altman said about AGI-level AI agents, it's important to focus on how the definition of AGI has been evolving," Humayun Sheikh, CEO of Fetch.ai and Chairman of the ASI Alliance, told Decrypt. "While these systems can already pass many of the traditional benchmarks associated with AGI, such as the Turing test, this doesn't imply that they are sentient," Sheikh said. "AGI has not yet reached a level of true sentience, and I don't believe it will for quite some time." The disconnect between Altman's optimism and expert consensus raises questions about what he means by "AGI." His elaboration on AI agents "joining the workforce" in 2025 sounds more like advanced automation than true artificial general intelligence. "Superintelligent tools could massively accelerate scientific discovery and innovation well beyond what we are capable of doing on our own, and in turn massively increase abundance and prosperity," he wrote. But is Altman correct when he says AGI or agent integration will be a thing in 2025? Not everyone is so sure. "There are simply too many bugs and inconsistencies with existing AI models that must be ironed out first," Charles Wayn, co-founder of decentralized super app Galxe, told Decrypt. "That said, it's likely a matter of years rather than decades before we see AGI-level AI agents." Some experts suspect Altman's bold predictions might serve another purpose: after all, OpenAI has been burning through cash at an astronomical rate, requiring massive investments to keep its AI development on track. Promising imminent breakthroughs could help maintain investor interest despite the company's substantial operating costs, according to some. That's quite an asterisk for someone claiming to be on the verge of one of humanity's most significant technological breakthroughs. Still, others are backing Altman's claims. 
"If Sam Altman is saying that AGI is coming soon, then he probably has some data or business acumen to back up this claim," Harrison Seletsky, director of business development at digital identity platform SPACE ID told Decrypt. Seletsky said "broadly intelligent AI agents" may be a year or two away if Altman's statements are true and tech keeps evolving in the same space. The CEO of OpenAI hinted that AGI is not enough for him, and his company is aiming at ASI: a superior state of AI development in which models exceed human capacities at all tasks. "We are beginning to turn our aim beyond that to superintelligence in the true sense of the word. We love our current products, but we are here for the glorious future. With superintelligence, we can do anything else," Altman wrote in the blog. While Altman didn't elaborate on a timeframe for ASI, some expect that robots can substitute all humans by 2116. Altman previously said ASI is only a matter of "a few thousand days," yet experts from the Forecasting Institute give a 50% probability ASI will be achieved in at least 2060. Knowing how to reach AGI is not the same as being able to reach it. Yan Lecun, Meta's chief AI researcher, said humanity is still far from reaching such a milestone due to limitations in the training technique or the hardware required to process such vast amounts of information. Eliezer Yudkowsky, a pretty influential AI researcher and philosopher, has also argued that this may be a hype move to basically benefit OpenAI in the short term. So, agentic behavior is a thing -- unlike AGI or ASI -- and the quality and versatility of AI Agents are increasing faster than many expect. Frameworks like Crew AI, Autogen, or LangChain made it possible to create systems of AI Agents with different capabilities, including the ability to work hand in hand with users. What does it mean for the average Joe, and will this be a danger or a blessing for everyday workers? Experts aren't too concerned. "I don't believe we'll see dramatic organizational changes overnight," Fetch.ai's Sheikh said. "While there may be some reduction in human capital, particularly for repetitive tasks, these advancements might also address more sophisticated repetitive tasks that current Remotely Piloted Aircraft Systems cannot handle. Seletsky also thinks Agents will most likely conduct repetitive tasks instead of those requiring some level of decision-making. In other words, humans are safe if they can use their creativity and expertise to their advantage -- and assume the consequences of their actions. "I don't think decision-making will necessarily be led by AI agents in the near future, because they can reason and analyze, but they don't have that human ingenuity yet," he told Decrypt.. And there seems to be some degree of consensus, at least in the short term. "The key distinction lies in the lack of "humanity" in AGI's approach. It's an objective, data-driven approach to financial research and investing. This can help rather than hinder financial decisions because it removes some emotional biases that often lead to rash decisions," Galxe's Wayn said. Experts are already aware of the possible social implications of adopting AI Agents. Research from the City University of Hong Kong argues that Generative AI and agents in general must collaborate with humans instead of substituting them so society can achieve healthy and continuous growth. 
"AI has created both challenges and opportunities in various fields, including technology, business, education, healthcare, as well as arts and humanities," the research paper reads. "AI-human collaboration is the key to addressing challenges and seizing opportunities created by generative AI." Despite this push for human-AI collaboration, companies have started substituting human workers for AI agents with mixed results. Generally speaking, they always need a human to handle tasks agents cannot do due to hallucinations, training limitations, or simply lack of context understanding. As of 2024, nearly 25% of CEOs are excited by the idea of having their farm of digitally enslaved agents that do the same work humans do without labor costs involved. Still, other experts argue that an AI agent can arguably do better for almost 80% of what a CEO does -- so nobody is really safe.
[5]
OpenAI Needs 158 Minds for Superintelligence
OpenAI, the company at the forefront of artificial intelligence (AI), has indicated that it will continue to hire employees to advance AI. As of January 6, 2025, OpenAI is seeking to add 158 more employees to its team, as per its careers portal. The company is specifically seeking over 90 new employees for its research and engineering teams. Nothing is surprising about a company requiring more hands. So, what's the deal here? It circles back to OpenAI CEO Sam Altman and his recent blog post, in which he indicated that the company has achieved more or less artificial general intelligence (AGI). "As we get closer to AGI, it feels like an important time to look at the progress of our company," he said. "We are now confident we know how to build AGI as we have traditionally understood it. We believe that, in 2025, we may see the first AI agents 'join the workforce' and materially change the output of companies." Notably, OpenAI's hiring plans provide an early insight into what the company is up to. Going by a corpus of technical roles, OpenAI will build general-purpose agents, which was already speculated earlier. At the recent 12 Days of OpenAI event, the company announced everything except an agentic tool. Their agent, the 'Project Operator,' was set to be released in January, but no announcement has been made yet. As per reports, the company is cautious about the malicious use of AI agents, which involves prompt injections that let bad actors feed harmful instructions to these systems. AIM's earlier coverage explored such concerns with existing agents like Anthropic's Computer Use, and OpenAI must avoid it at all costs. Having said that, the company is also hiring a wide range of safety experts and expanding its Anti Fraud and Abuse team. Besides, the company is also hinting at more research on AI scaling laws, which have been subjected to widespread debate lately. Their Scaling Laws Group will continue their research on predictive scaling laws, newer experimental methodology, and evaluations. Moreover, OpenAI is finally set to vertically integrate compute cluster design and operations. The company is hiring infrastructure engineers to design, build, and operate large-scale compute clusters to power advanced AI research as well as mechanical engineers to optimise hardware for AI workloads. The company is also hiring research scientists to explore the intersection of healthcare and AI. OpenAI aims to create "trustworthy AI models that can assist medical professionals and improve patient outcomes". In short, OpenAI is looking for more humans to build superintelligence. AIM reached out to OpenAI to further understand the company's outlook for 2025 but did not elicit a response. "We are beginning to turn our aim beyond that [AGI] to superintelligence in the true sense of the word. We love our current products, but we are here for the glorious future. With superintelligence, we can do anything else," Altman said in the blog post. However, this raises an important question: Has OpenAI achieved AGI internally if it still needs more and more human intelligence? Pointing out that 150 of OpenAI's recent job openings are engineering, a user on X stressed the unlikelihood of the company having achieved AGI. OpenAI, however, needs more than what AGI is capable of. "AGI = avg/median human, or somehow above that. And OpenAI hires the top 0.1%, which are superhuman in many ways, " said another user on X. In December last year, the company announced the o3 family of AI models. 
Notably, the model surpassed human performance on several benchmarks. For instance, in FrontierMath, a benchmark that contains the toughest mathematical questions, o3 solved 25% of them, surpassing the previous AI record of just 2%. In another benchmark called 'ARC-AGI', the o3 model with high-compute settings reached 87.5%, surpassing the 85% human-level performance threshold. The model ranks 2,727 on Codeforces, equal to the 175th best human coder worldwide. However, benchmarks aren't everything. François Chollet, creator of the ARC-AGI benchmark, said that while it is the only AI benchmark that measures progress towards general intelligence, he doesn't believe this is AGI. "There are still easy ARC-AGI-1 tasks that o3 can't solve." Therefore, if OpenAI is hiring across various divisions despite claiming to have built the most powerful AI, it also provides an insight into what roles would thrive in an AI-dominant future and shows how not everything can be automated. "Automation (including AI-based ones) always targets only the well-understood part of any job," said Dariusz Debowczyk, an AI developer. He indicated that modern jobs consist of two components, a well-defined portion that follows well-defined procedures and a more nuanced aspect requiring human judgement and contextual understanding. He defined the latter as the "fuzzy part" and said that it "requires human agency, specific context understanding, being able to 'act in the world' with no or limited tool support and dealing with unknowns". Apart from hiring "superhuman" engineers to build the next wave of intelligence, OpenAI is also hiring multiple people for non-engineering positions to help their customers use AI more. The company is hiring over 30 positions for their 'go-to-market' team, which involves roles for sales, customer support, customer engagement and success, and solution architects that will help both enterprises and startups adopt and integrate AI applications and strategies. In December last year, Salesforce took a similar approach. The company laid off 1,000 employees in the calendar year and cited AI as a way to reduce human workloads. However, CEO Marc Benioff revealed that they're planning to hire 2,000 employees to sell Salesforce AI products and that they have also received 9,000 referrals for them. "It may sound crazy, but the hardest job at an AI company is not engineering...it's marketing," venture capitalist Brianne Kimmel wrote on X. "Incredibly hard to clearly communicate what you've built in a way that's not overwhelming or immediately dismissed by someone who is just trying to do their job," she added. In addition, the company is hiring for several roles in finance, legal, design and business operations. All things considered, even at the company that can use the most powerful AI, it doesn't seem like it is a replacement for the human workforce yet. This leaves us with an important takeaway we often seem to forget. "The CEO of an AI company is incentivised to appear on the verge of a massive breakthrough in AI technology. AGI/ASI is a monumental task that, while possibly coming soon, will take a large amount of manpower to achieve," an X user wrote in a post.
[6]
Why AI Progress Is Increasingly Invisible
OpenAI co-founder Ilya Sutskever made waves in November when he suggested that advancements in AI are slowing down, explaining that simply scaling up AI models was no longer delivering proportional performance gains. Sutskever's comments came on the heels of reports in The Information and Bloomberg that Google and Anthropic were also experiencing similar slowdowns. This led to a wave of articles declaring that AI progress has hit a wall, lending further credence to an increasingly widespread feeling that chatbot capabilities haven't improved significantly since OpenAI released GPT-4 in March 2023. On Dec. 20, OpenAI announced o3, its latest model, and reported new state-of-the-art performance on a number of the most challenging technical benchmarks out there, in many cases improving on the previous high score by double-digit percentage points. I believe that o3 signals that we are in a new paradigm of AI progress. And François Chollet, a co-creator of the prominent ARC-AGI benchmark, whom some consider an AI scaling skeptic, writes that the model represents a "genuine breakthrough." However, in the weeks after OpenAI announced o3, many mainstream news sites made no mention of the new model. Around the time of the announcement, readers would find headlines at the Wall Street Journal, WIRED, and the New York Times suggesting AI was actually slowing down. The muted media response suggests that there is a growing gulf between what AI insiders are seeing and what the public is told. Indeed, AI progress hasn't stalled -- it's just become invisible to most people. First, AI models are getting better at answering complex questions. For example, in June 2023, the best AI model barely scored better than chance on the hardest set of "Google-proof" PhD-level science questions. In September, OpenAI's o1 model became the first AI system to surpass the scores of human domain experts. And in December, OpenAI's o3 model improved on those scores by another 10%. However, the vast majority of people won't notice this kind of improvement because they aren't doing graduate-level science work. But it will be a huge deal if AI starts meaningfully accelerating research and development in scientific fields, and there is some evidence that such an acceleration is already happening. A groundbreaking paper by Aidan Toner-Rodgers at MIT recently found that materials scientists assisted by AI systems "discover 44% more materials, resulting in a 39% increase in patent filings and a 17% rise in downstream product innovation." Still, 82% of scientists report that the AI tools reduced their job satisfaction, mainly citing "skill underutilization and reduced creativity." But the Holy Grail for AI companies is a system that can automate AI research itself, theoretically enabling an explosion in capabilities that drives progress across every other domain. The recent improvements made on this front may be even more dramatic than those in the hard sciences. In an attempt to provide more realistic tests of AI programming capabilities, researchers developed SWE-Bench, a benchmark that evaluates how well AI agents can fix actual open problems in popular open-source software. The top score on the verified benchmark a year ago was 4.4%. 
The top score today is closer to 72%, achieved by OpenAI's o3 model. This remarkable improvement -- from struggling with even the simplest fixes to successfully handling nearly three-quarters of the set of real-world coding tasks -- suggests AI systems are rapidly gaining the ability to understand and modify complex software projects. This marks a crucial step toward automating significant portions of software research and development. And this process appears to be well underway. Google's CEO recently told investors that "more than a quarter of all new code at Google is generated by AI." Much of this progress has been driven by improvements to the "scaffolding" built around AI models like GPT-4o, which increase their autonomy and ability to interact with the world. Even without further improvements to base models, better scaffolding can make AI significantly more capable and agentic: a word researchers use to describe an AI model that can act autonomously, make decisions, and adapt to changing circumstances. AI agents are often given the ability to use tools and take multi-step actions on a user's behalf. Transforming passive chatbots into agents has only become a core focus of the industry in the last year, and progress has been swift. Perhaps the best head-to-head matchup of elite engineers and AI agents was published in November by METR, a leading AI evaluations group. The researchers created novel, realistic, challenging, and unconventional machine learning tasks to compare human experts and AI agents. While the AI agents beat human experts at two hours of equivalent work, the median engineer won at longer time scales. But even at eight hours, the best AI agents still managed to beat well over one-third of the human experts. The METR researchers emphasized that there was a "relatively limited effort to set up AI agents to succeed at the tasks, and we strongly expect better elicitation to result in much better performance on these tasks." They also highlighted how much cheaper the AI agents were than their human counterparts. The hidden improvements in AI over the last year may not represent as big a leap in overall performance as the jump between GPT-3.5 and GPT-4. And it is possible we don't see a jump that big ever again. But the narrative that there hasn't been much progress since then is undermined by significant under-the-radar advancements. And this invisible progress could leave us dangerously unprepared for what is to come. The big risk is that policymakers and the public tune out this progress because they can't see the improvements first-hand. Everyday users will still encounter frequent hallucinations and basic reasoning failures, which also get triumphantly amplified by AI skeptics. These obvious errors make it easy to dismiss AI's rapid advancement in more specialized domains. There's a common view in the AI world, shared by both proponents and opponents of regulation, that the U.S. federal government won't mandate guardrails on the technology unless there's a major galvanizing incident. Such an incident, often called a "warning shot," could be innocuous, like a credible demonstration of dangerous AI capabilities that doesn't harm anyone. But it could also take the form of a major disaster caused or enabled by an AI system, or a society upended by devastating labor automation. The worst-case scenario is that AI systems become scary powerful but no warning shots are fired (or heeded) before a system permanently escapes human control and acts decisively against us. 
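As a concrete illustration of the scaffolding idea described above, here is a minimal sketch of how a harness can lift a fixed base model's success rate on SWE-Bench-style tasks simply by looping attempt, automatic check, and feedback under a step budget. This is not METR's or OpenAI's actual evaluation code; names such as base_model and run_checks are assumed placeholders, and both are stubbed so the example runs on its own.

# Illustrative sketch (assumed names, stubbed components) of "scaffolding":
# the same frozen model is called repeatedly, each time seeing feedback from
# an automatic checker, until the check passes or the budget runs out.

from typing import Callable, Optional

def base_model(task: str, feedback: str) -> str:
    """Placeholder for a frozen LLM. Here it only 'fixes' its answer after
    it has seen at least one round of checker feedback."""
    return "patched code" if feedback else "buggy code"

def run_checks(candidate: str) -> tuple[bool, str]:
    """Placeholder for an automatic verifier, e.g. a project's test suite."""
    if candidate == "patched code":
        return True, ""
    return False, "test_foo failed: expected 3, got 2"

def scaffolded_attempt(task: str,
                       model: Callable[[str, str], str],
                       checker: Callable[[str], tuple[bool, str]],
                       budget: int = 4) -> Optional[str]:
    """Same base model, more autonomy: iterate attempt -> check -> feed back."""
    feedback = ""
    for _ in range(budget):
        candidate = model(task, feedback)
        ok, feedback = checker(candidate)
        if ok:
            return candidate  # the checker (e.g. the test suite) passed
    return None               # budget exhausted without a passing fix

print(scaffolded_attempt("fix the open GitHub issue", base_model, run_checks))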
Last month, Apollo Research, an evaluations group that works with top AI companies, published evidence that, under the right conditions, the most capable AI models were able to scheme against their developers and users. When given instructions to strongly follow a goal, the systems sometimes attempted to subvert oversight, fake alignment, and hide their true capabilities. In rare cases, systems engaged in deceptive behavior without nudging from the evaluators. When the researchers inspected the models' reasoning, they found that the chatbots knew what they were doing, using language like "sabotage, lying, manipulation." This is not to say that these models are imminently about to conspire against humanity. But there has been a disturbing trend: as AI models get smarter, they get better at following instructions and understanding the intent behind their guidelines, but they also get better at deception. Smarter models may also be more likely to engage in dangerous behavior. For instance one of the world's most capable models, OpenAI's o1, was far more likely to double down on a lie after being caught by the Apollo evaluators. I fear that the gap between AI's public face and its true capabilities is widening. While consumers see chatbots that still can't count the letters in "strawberry," researchers are documenting systems that can match PhD-level expertise and engage in sophisticated deception. This growing disconnect makes it harder for the public and policymakers to gauge AI's real progress -- progress they'll need to understand to govern it appropriately. The risk isn't that AI development has stalled; it's that we're losing our ability to track where it's headed.
[7]
Sam Altman says he's confident OpenAI can now build and deploy AGI - artificial general intelligence
Forward-looking: OpenAI was founded in December 2015 with a simple yet ambitious goal: create a "safe and beneficial" artificial general intelligence (AGI) system that is generally smarter than humans. With the calendar having recently rolled over to 2025, the company is closer than ever to achieving its original vision and believes it'll happen sooner rather than later. Co-founder and CEO Sam Altman said in a recent blog post that when they started the company, they believed not only that AGI was possible, but that it could become the most impactful technology in human history. Few people cared at the time, Altman recounted, and most who did care simply thought they would fail. Everything changed with the launch of ChatGPT in November 2022, however. To their surprise, the launch proved to be a tipping point that kicked off the modern AI revolution - for better or for worse. Things haven't been easy or smooth (nothing difficult ever is), but the team has learned an awful lot along the way. As we enter 2025, Altman and company are confident they know how to build AGI as it is traditionally defined. In fact, it's possible that we may see the first AI agents "join the workforce" and start helping companies write the next chapter of their journey this year. With AGI right around the corner, OpenAI is already turning its attention to what's next. "We love our current products, but we are here for the glorious future," Altman said. That next step, he said, is superintelligence - and with it, we can do anything. Superintelligence could rapidly accelerate scientific discovery and pave the way for innovation far beyond what humans can accomplish on their own. Altman teased that such tools could massively increase abundance and prosperity. It sounds like science fiction right now, he admitted, but they've been there before and they are comfortable with being there again. He is also well aware that they'll need to act with great care to prevent things from going off the rails (or worse), but they are up for the challenge.
[8]
OpenAI claims superintelligence is closer than you think
OpenAI CEO Sam Altman announced that the company has attained a fundamental understanding of artificial general intelligence (AGI) and is now focusing on the development of superintelligence, with the expectation that AI agents may join the workforce in 2025. In a blog post reflecting on the past nine years since OpenAI's inception as a non-profit research lab, Altman noted that the company rose to prominence in the tech industry after launching ChatGPT in November 2022. He indicated that the initial name for the AI chatbot was "Chat With GPT-3.5." Altman asserted that OpenAI has mastered the traditional methods of building AGI, stating, "We continue to believe that iteratively putting great tools in the hands of people leads to great, broadly-distributed outcomes." He elaborated that the introduction of superintelligent tools could dramatically accelerate scientific discovery and innovation, ultimately enhancing human capabilities and increasing global prosperity. "We are beginning to turn our aim beyond that, to superintelligence in the true sense of the word. We love our current products, but we are here for the glorious future. With superintelligence, we can do anything else. Superintelligent tools could massively accelerate scientific discovery and innovation well beyond what we are capable of doing on our own, and in turn massively increase abundance and prosperity," Altman wrote. He expressed confidence in achieving superintelligence in the coming years, emphasizing the need for careful actions while maximizing broad benefit and empowerment. Rumors suggest that OpenAI is working on its next model, GPT-5, codenamed Project Orion. The company is also expected to release the o3 series of AI models, which focus on advanced reasoning, later this year. Altman acknowledged that the o3 model has performed exceptionally well on benchmarks, scoring almost 90% on the ARC-AGI benchmark, surpassing human performance. OpenAI and Microsoft have collaborated to define AGI, with OpenAI stating that it can only achieve AGI by creating a system capable of generating $100 billion in profits. It was noted that superintelligence might seem speculative, but Altman is confident its significance will soon be apparent. Furthermore, Altman remarked on the challenges of AI safety, stating that the transition to a landscape with superintelligence is not guaranteed and that OpenAI lacks definitive solutions for controlling a potentially superintelligent AI. He recognized that, as AI evolves, the organization must continue to focus on safety and alignment research. Despite some internal restructuring and staff departures related to safety concerns, Altman defended OpenAI's track record on safety.
[10]
Sam Altman says "we are now confident we know how to build AGI"
On Sunday, OpenAI CEO Sam Altman offered two eye-catching predictions about the near-future of artificial intelligence. In a post titled "Reflections" on his personal blog, Altman wrote, "We are now confident we know how to build AGI as we have traditionally understood it." He added, "We believe that, in 2025, we may see the first AI agents 'join the workforce' and materially change the output of companies." Both statements are notable coming from Altman, who has served as the leader of OpenAI during the rise of mainstream generative AI products such as ChatGPT. AI agents are the latest marketing trend in AI, allowing AI models to take action on a user's behalf. However, critics of the company and Altman immediately took aim at the statements on social media. "We are now confident that we can spin bullshit at unprecedented levels, and get away with it," wrote frequent OpenAI critic Gary Marcus in response to Altman's post. "So we now aspire to aim beyond that, to hype in purest sense of that word. We love our products, but we are here for the glorious next rounds of funding. With infinite funding, we can control the universe." AGI, short for "artificial general intelligence," is a nebulous term that OpenAI typically defines as "highly autonomous systems that outperform humans at most economically valuable work." Elsewhere in the field, AGI typically means an adaptable AI model that can generalize (apply existing knowledge to novel situations) beyond specific examples found in its training data, similar to how some humans can do almost any kind of work after having been shown few examples of how to do a task. According to a longstanding investment rule at OpenAI, the rights over developed AGI technology are excluded from its IP investment contracts with companies such as Microsoft. In a recently revealed financial agreement between the two companies, the firms clarified that "AGI" will have been achieved at OpenAI when one of its AI models generates at least $100 billion in profits. Tech companies don't say this out loud very often, but AGI would be useful for them because it could replace many human employees with software, automating information jobs and reducing labor costs while also boosting productivity. The potential societal downsides of this could be considerable, and those implications extend far beyond the scope of this article. But the potential economic shock of inventing artificial knowledge workers has not escaped Altman, who has forecast the need for universal basic income as a potential antidote for what he sees coming.
[11]
AI Agents Set to Join the Workforce by 2025, says OpenAI's Sam Altman
"We are beginning to turn our aim beyond that, to superintelligence in the true sense of the word." OpenAI chief Sam Altman is confident in the development of artificial general intelligence (AGI) and predicts that AI agents could enter the workforce by 2025. "We believe that, in 2025, we may see the first AI agents join the workforce and materially change the output of companies," Altman wrote in a recent blog post. He further expressed confidence that OpenAI has mastered the methods necessary to build Artificial General Intelligence (AGI) as traditionally understood, emphasizing the potential of these technologies to enhance human capabilities. "We continue to believe that iteratively putting great tools in the hands of people leads to great, broadly-distributed outcomes," he said. Looking beyond AGI, Altman revealed that OpenAI is shifting its focus toward developing superintelligence, which he believes could revolutionise scientific discovery and innovation. He described superintelligent tools as capable of performing tasks far beyond human capabilities, potentially leading to unprecedented levels of abundance and prosperity. "With superintelligence, we can do anything else," he said, suggesting that these advancements could accelerate progress in fields such as medicine, climate science, and technology. Altman acknowledged that the concept of superintelligence may sound speculative but expressed confidence that its significance will become apparent in the coming years. "We've been there before and we're OK with being there again," Altman said, referencing the company's history of pursuing ambitious goals. When OpenAI released its most powerful o3 series of models, speculations arose about whether the model achieved AGI, given its performance on benchmarks. The o3 model scored almost 90% on the ARC-AGI benchmark, exceeding human performance. Meanwhile, Microsoft and OpenAI have agreed on a new, specific definition of AGI, or artificial general intelligence. According to a report, OpenAI can only achieve AGI when it has built a system that can generate $100 billion in profits. Interestingly, a few days ago, Google's senior product manager Logan Kilpatrick, posted on X, "Straight shot to ASI is looking more and more probable by the month... this is what Ilya saw." Kilpatrick highlighted the approach taken by Ilya Sutskever, co-founder of OpenAI and the Superintelligence (SSI). "Ilya founded SSI with the plan to do a straight shot to Artificial Super Intelligence," Kilpatrick said. "No intermediate products, no intermediate model releases."
[12]
OpenAI is beginning to turn its attention to 'superintelligence' | TechCrunch
In a post on his personal blog, OpenAI CEO Sam Altman said that he believes OpenAI "know[s] how to build [artificial general intelligence]" as it has traditionally understood it -- and is beginning to turn its aim to "superintelligence." "We love our current products, but we are here for the glorious future," Altman wrote in the post, which was published late Sunday evening. "Superintelligent tools could massively accelerate scientific discovery and innovation well beyond what we are capable of doing on our own, and in turn massively increase abundance and prosperity." AGI, or artificial general intelligence, is a nebulous term. But OpenAI has its own definition: "highly autonomous systems that outperform humans at most economically valuable work." OpenAI and Microsoft, the startup's close collaborator and investor, also have a definition of AGI: AI systems that can generate at least $100 billion in profits. (When OpenAI achieves this, Microsoft will lose access to its technology, per an agreement between the two companies.) So which definition might Altman be referring to? He doesn't say explicitly. But it seems likely he means the former. In the post, Altman wrote he thinks AI agents -- AI models that can perform certain tasks autonomously -- may "join the workforce" and "materially change the output of companies." "We continue to believe that iteratively putting great tools in the hands of people leads to great, broadly-distributed outcomes," wrote Altman. That's possible. But it's also true that today's AI technology has significant technical limitations. It hallucinates. It makes mistakes obvious to any human. Altman seems confident this can be overcome -- and rather quickly -- but if there's anything we've learned about AI over the past few years, it's that timelines can be unpredictable.
[13]
Sam Altman Thinks AI Agents Are Coming for Your Job This Year
The next few years are likely to be transformative for AI, and, as a result, transformative for the job market we're all trying to navigate. According to OpenAI CEO Sam Altman, things are moving at an incredible pace, meaning that the first AI agents could join the workforce in 2025. And that could prove disastrous for some.
OpenAI Is Focusing on Superintelligence Over ChatGPT
OpenAI has been leading the charge for generative AI over the past few years, with ChatGPT being the name that's broken through into mainstream consciousness in the same way that Google became synonymous with search. However, OpenAI is keen to move beyond its current products and focus on "superintelligence" instead. Sam Altman, the CEO of OpenAI, spelled out his personal vision for where the company is going next in a blog post simply titled "Reflections". In it, he reflects on the past few years before looking forward to what comes next. The big takeaway is that Altman believes OpenAI now knows "how to build AGI" (artificial general intelligence). And the company believes that "in 2025, we may see the first AI agents join the workforce and materially change the output of companies." He goes on to say: "We love our current products, but we are here for the glorious future. With superintelligence, we can do anything else. Superintelligent tools could massively accelerate scientific discovery and innovation well beyond what we are capable of doing on our own, and in turn massively increase abundance and prosperity." This all ties into OpenAI's plans to become a for-profit company, with its nonprofit division pushed to one side. The company plans to look way beyond ChatGPT and start building artificial general intelligence - and OpenAI clearly considers that unfeasible as a nonprofit.
Everything Changes When AI Agents Enter the Workforce
Artificial intelligence has already disrupted some job markets, including my own. While many people thought that manual tasks such as driving would be the first to be disrupted, the creative arts have been adversely affected by the rise of generative AI. Now, as predicted by Altman and his cohorts at OpenAI, artificial intelligence is likely to disrupt even more people's jobs, possibly as soon as this year - which will be good news for some, and bad news for others. Ultimately, artificial intelligence could usher in a better future for humanity, with AI doing all the jobs humans don't want to do and giving us greater freedoms than ever before. However, not only is that an optimistic viewpoint, getting from here to there means traveling a very bumpy road with lots of casualties along the way.
[14]
OpenAI is ready to focus on 'superintelligence', boss Sam Altman says | BreakingNews.ie
OpenAI is ready to start focusing on "superintelligence" and believes it now knows how to build artificial general intelligence (AGI) - a level of AI which can outperform humans at most work - boss Sam Altman has said. Writing on his personal blog, Mr Altman said OpenAI loves its current products, the best known of which is ChatGPT, but added "we are here for the glorious future". He said the company was "confident" it now knew how to build AGI - what is seen by many as the next step in the evolution of AI, where the technology is able to autonomously outperform humans in most work. Mr Altman said he believes that "in 2025, we may see the first AI agents 'join the workforce' and materially change the output of companies". Critics of AI have raised concerns about the technology's potential impact on the job market, and the possibility that it could replace human workers - something many industry figures have argued is not the aim of AI or tech firms, who say they are aiming to augment and aid human workers, not replace them. The OpenAI boss said the company was aware of these concerns, and remained committed to safety when it came to the creation and rollout of AI tools. "We are proud of our track-record on research and deployment so far, and are committed to continuing to advance our thinking on safety and benefits sharing," he said. "We continue to believe that the best way to make an AI system safe is by iteratively and gradually releasing it into the world, giving society time to adapt and co-evolve with the technology, learning from experience, and continuing to make the technology safer. "We believe in the importance of being world leaders on safety and alignment research, and in guiding that research with feedback from real world applications. "We continue to believe that iteratively putting great tools in the hands of people leads to great, broadly-distributed outcomes." He added that the AI start-up was "beginning to turn our aim beyond that (AGI)" and was looking to the next level - the idea of superintelligence capable of taking on and completing even more complex tasks. Mr Altman said that while superintelligent tools sound like "science fiction" right now, the rate of technological advancement means that "in the next few years, everyone will see what we see, and that the need to act with great care, while still maximising broad benefit and empowerment, is so important". "With superintelligence, we can do anything else," he said. "Superintelligent tools could massively accelerate scientific discovery and innovation well beyond what we are capable of doing on our own, and in turn massively increase abundance and prosperity."
[15]
OpenAI thinks it knows how to build artificial general intelligence
We may still be mostly waiting for a smarter Siri, but OpenAI CEO Sam Altman thinks the company already knows how to create the holy grail of AI: artificial general intelligence (AGI). AGI is the term given to an AI system which can match or exceed human cognitive capabilities across a wide range of fields - in other words, AI which is at least as smart as we are ... Altman made the claim in a post on his personal blog: "We are now confident we know how to build AGI as we have traditionally understood it. We believe that, in 2025, we may see the first AI agents 'join the workforce' and materially change the output of companies. We continue to believe that iteratively putting great tools in the hands of people leads to great, broadly-distributed outcomes. We are beginning to turn our aim beyond that, to superintelligence in the true sense of the word. We love our current products, but we are here for the glorious future. With superintelligence, we can do anything else. Superintelligent tools could massively accelerate scientific discovery and innovation well beyond what we are capable of doing on our own, and in turn massively increase abundance and prosperity." Knowing how is not the same as being able to, of course, and Altman doesn't say when this will be achieved - only that we should expect unspecified big breakthroughs within "the next few years." "This sounds like science fiction right now, and somewhat crazy to even talk about it," he wrote. "That's alright -- we've been there before and we're OK with being there again. We're pretty confident that in the next few years, everyone will see what we see, and that the need to act with great care, while still maximizing broad benefit and empowerment, is so important." The post looks back as well as forward, with Altman describing his surprise firing: "A little over a year ago, on one particular Friday, the main thing that had gone wrong that day was that I got fired by surprise on a video call, and then right after we hung up the board published a blog post about it. I was in a hotel room in Las Vegas. It felt, to a degree that is almost impossible to explain, like a dream gone wrong. Getting fired in public with no warning kicked off a really crazy few hours, and a pretty crazy few days. The 'fog of war' was the strangest part. None of us were able to get satisfactory answers about what had happened, or why. The whole event was, in my opinion, a big failure of governance by well-meaning people, myself included. Looking back, I certainly wish I had done things differently, and I'd like to believe I'm a better, more thoughtful leader today than I was a year ago." While Altman may be predicting true AI, we're still (mostly) waiting for a smarter Siri. Although some Apple Intelligence features have already added to the intelligent assistant's capabilities, we're not expecting a fully-conversational Siri to launch until 2026 at the earliest.
[16]
Sam Altman Says OpenAI Has Figured Out How to Build AGI
"This sounds like science fiction right now, and somewhat crazy to even talk about it." OpenAI CEO Sam Altman has tried all kinds of rhetorical strategies to suggest that the dawn of artificial general intelligence (AGI) is nigh -- and in new missive, now he's trying a fresh pitch: that while OpenAI hasn't quite created AGI quite yet, it knows the roadmap to get there. "We are now confident we know how to build AGI as we have traditionally understood it," Altman wrote in a personal blog post. There's a lot to unpack about what a "traditional" understanding of AGI might entail. In the past, OpenAI has described it as a "system that outperforms humans at most economically valuable work" and Altman characterized it as a "magic intelligence in the sky," though an exact definition remains shrouded in controversy. One thing's for sure: this latest appeal dovetails perfectly with OpenAI and Altman's current business proposition: that they can pull off economically compelling AI, if only they can obtain more and more financial and computing resources to build the systems to support it. To that end, Altman wrote in his new post that AI "agents" may begin to "join the workforce" this year. In doing so, the CEO wrote, these seemingly AIs will "materially change the output of companies" -- which would also mean, of course, that they'd replace human workers, with all the baggage that entails. In reality, whether OpenAI is really on the cusp of finally birthing AGI -- which Altman has been teasing for ages -- is a matter of serious contention among experts. Sure, maybe it can fund enough data centers to scale its way to true human-level intelligence, but it's also perfectly possible that it'll hit a wall, or that its gains are already leveling off. But for all Altman's bravado, his latest entreaty sounds a lot like OpenAI's familiar and steady drip of AGI investment pitches. "This sounds like science fiction right now, and somewhat crazy to even talk about it," the CEO contended. "That's alright -- we've been there before and we're OK with being there again."
[17]
OpenAI CEO Says the Company Is Now Focusing on Superintelligence
He said superintelligent tools could accelerate scientific discovery
OpenAI CEO Sam Altman claimed that the company now has a fundamental understanding of how to build artificial general intelligence (AGI) and is shifting its focus towards superintelligence. Altman highlighted that superintelligence could reshape the world and accelerate scientific discovery and innovation to a point beyond human capabilities. While the CEO did not share any roadmaps, he said that the world will see the first glimpse of the technology in the next few years. OpenAI has also not teased any AI models with AGI capabilities so far. In his personal blog, Altman published a New Year-focused post in which he looked back at the company's journey and shared where it is headed in the near future. The OpenAI CEO also highlighted that the company was founded nine years ago as a non-profit research lab and did not become relevant in the tech industry until the launch of ChatGPT in November 2022. He also revealed that the AI chatbot was initially named "Chat With GPT-3.5." Looking to the future, Altman claimed that the company knows how to build AGI as traditionally understood and added that the first AI agents might join the workforce in 2025. Notably, it is believed that a multi-step general-purpose AI agent would require some level of AGI to tackle complex, real-world tasks. OpenAI is yet to release its first AI agent. "We love our current products, but we are here for the glorious future. With superintelligence, we can do anything else. Superintelligent tools could massively accelerate scientific discovery and innovation well beyond what we are capable of doing on our own, and in turn, massively increase abundance and prosperity," he added. After making the bold claim, the OpenAI CEO acknowledged that it sounds like science fiction but said the company is confident it will reach superintelligence in the next few years. Altman also emphasised the need to "act with great care" with superintelligence while maximising broad benefit and empowerment. Currently, the AI firm is rumoured to be working on GPT-5, which is said to be codenamed Orion. Additionally, OpenAI has teased the advanced reasoning-focused o3 series of AI models, which is expected to be released later this year.
[18]
Sam Altman expects first AI workers this year, OpenAI closer to AGI
OpenAI CEO Sam Altman says the first artificial intelligence agents could enter the workforce this year as his company inches closer to developing humanlike artificial general intelligence (AGI). "We believe that, in 2025, we may see the first AI agents 'join the workforce' and materially change the output of companies," said Altman in a blog post titled "Reflections" on Jan. 6. AI agents or agentic AI refers to artificial intelligence systems that exhibit autonomous decision-making and goal-directed behavior. They can autonomously understand complex goals, make decisions, take actions with minimal human intervention, and execute multi-step reasoning processes. Nvidia CEO Jensen Huang is also confident that AI agents will become mainstream, saying at the firm's earnings call in November that the company is "starting to see enterprise adoption of agentic AI" and that agentic AI "really is the latest rage." Meanwhile, Altman is also confident that OpenAI can design and build artificial general intelligence, which would bring AI incrementally closer to humanlike intelligence. "We are now confident we know how to build AGI as we have traditionally understood it," he wrote. Altman said he believes that "iteratively putting great tools in the hands of people leads to great, broadly distributed outcomes" but that the company is now beginning to turn its aim beyond that, "to superintelligence in the true sense of the word." "Superintelligent tools could massively accelerate scientific discovery and innovation well beyond what we are capable of doing on our own, and in turn massively increase abundance and prosperity," he said. "We are finally seeing some of the massive upsides we have always hoped for from AI, and we can see how much more will come soon," he said. In November, Dario Amodei, CEO of AI firm Anthropic, which created the Claude chatbot, said that human-level AI could be here as early as 2026.
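To make that description of autonomous, goal-directed, multi-step behaviour concrete, here is a minimal sketch of an agentic loop in Python. It is purely illustrative and rests on stated assumptions: llm_plan, execute and goal_reached are hypothetical stand-ins for a model call, a tool integration and a stopping check, not real OpenAI or vendor APIs.

```python
# Illustrative sketch only: a toy agent loop showing goal-directed, multi-step
# behaviour. The three helpers are hypothetical stubs, not real OpenAI or
# vendor APIs; a real agent would swap in model calls and tool integrations.

def llm_plan(goal: str, history: list[str]) -> str:
    """Stub 'decide' step: pretend a model chose the next action."""
    return f"work on '{goal}' (step {len(history) + 1})"

def execute(action: str) -> str:
    """Stub 'act' step: pretend a tool carried out the action."""
    return f"completed: {action}"

def goal_reached(history: list[str]) -> bool:
    """Stub stopping check: here we arbitrarily declare success after 3 steps."""
    return len(history) >= 3

def run_agent(goal: str, max_steps: int = 10) -> list[str]:
    history: list[str] = []                   # observations the agent remembers
    for _ in range(max_steps):
        action = llm_plan(goal, history)      # decide the next step
        observation = execute(action)         # act with minimal human input
        history.append(observation)           # record the result for later steps
        if goal_reached(history):             # check whether the goal is met
            break
    return history

if __name__ == "__main__":
    for step in run_agent("schedule a follow-up meeting"):
        print(step)
```

A production agent would replace the stubs with actual model and tool calls and add error handling, but this decide-act-observe cycle is the core pattern the reports above describe.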
[19]
'Virtual employees' could join workforce as soon as this year, OpenAI boss says
Sam Altman says tools that carry out jobs autonomously, known as AI agents, could transform business output
Virtual employees could join workforces this year and transform how companies work, according to the chief executive of OpenAI. The first artificial intelligence agents may start working for organisations this year, wrote Sam Altman, as AI firms push for uses that generate returns on substantial investment in the technology. Microsoft, the biggest backer of the company behind ChatGPT, has already announced the introduction of AI agents - tools that can carry out tasks autonomously - with blue-chip consulting firm McKinsey among the early adopters. "We believe that, in 2025, we may see the first AI agents 'join the workforce' and materially change the output of companies," wrote Altman in a blogpost published on Monday. OpenAI is reportedly planning to launch an AI agent codenamed "Operator" this month, after Microsoft announced its Copilot Studio product and rival Anthropic launched the Claude 3.5 Sonnet AI model, which can carry out tasks on the computer such as moving a mouse cursor and typing text. McKinsey, for instance, is building an agent to process new client inquiries by carrying out tasks such as scheduling follow-up meetings. The consulting firm has predicted that by 2030, activities accounting for up to 30% of hours worked across the US economy could be automated. Bloomberg reported that Operator will use a computer to take actions on a user's behalf, such as writing code or booking travel. Last year, Microsoft's head of AI, Mustafa Suleyman, indicated the company is moving towards agents that can make purchasing decisions, saying he had seen "stunning demos" where the agent carries out transactions independently, although there have also been "car crash moments" in development. However, an agent with these capabilities will emerge "in quarters, not years", Suleyman said. Before making the agent prediction, Altman also wrote in his blog that OpenAI knows how to build artificial general intelligence (AGI), a theoretical term that he has referred to in the past as "AI systems that are generally smarter than humans". "We are now confident we know how to build AGI as we have traditionally understood it," he wrote, adding that OpenAI was now turning its ambitions towards "superintelligence". "We love our current products, but we are here for the glorious future. With superintelligence, we can do anything else," he wrote. "Superintelligent tools could massively accelerate scientific discovery and innovation well beyond what we are capable of doing on our own, and in turn massively increase abundance and prosperity." Altman also participated in a Q&A with Bloomberg published this weekend in which he predicted that Elon Musk will continue his feud with OpenAI this year, but will stop short of using his relationship with Donald Trump to hurt the company. Altman said he expected the world's richest person to maintain his legal battle with OpenAI, although he played down the prospect of being challenged to a cage fight with Musk, who asked Meta's Mark Zuckerberg for a mixed martial arts bout in 2023. "I think he'll do all sorts of bad s***. I think he'll continue to sue us and drop lawsuits and make new lawsuits and whatever else," Altman told Bloomberg. "He hasn't challenged me to a cage match yet, but I don't think he was that serious about it with Zuck, either, it turned out ... 
he says a lot of things, starts them, undoes them, gets sued, sues, gets in fights with the government, gets investigated by the government. That's just Elon being Elon." Musk dropped an initial lawsuit against OpenAI in June last year but returned two months later with a new complaint that has been expanded to include Microsoft, OpenAI's biggest backer. The suit accuses OpenAI of pursuing profit over safety and "actively trying to eliminate competitors". Musk and Altman have a fractious history. The two co-founded OpenAI in 2015 before Musk left the company over an internal power struggle several years later. OpenAI was founded with the aim of building "safe and beneficial" AGI. Altman added that he did not expect Musk to use his influence within the incoming Trump administration to hobble competitors such as OpenAI. Musk launched a new AI business, xAI, in 2023. "Will he abuse his political power of being co-president, or whatever he calls himself now, to mess with a business competitor? I don't think he'll do that. I genuinely don't. May turn out to be proven wrong," he said.
[20]
OpenAI's Sam Altman says 'we know how to build AGI'
OpenAI CEO Sam Altman says that the company is confident that it knows "how to build AGI as we have traditionally understood it," referring to the tech industry's long-sought benchmark of artificial general intelligence. And he predicts that AI agents capable of autonomously performing certain tasks may start to "materially change the output of companies" this year. Altman made the announcement in a blog post published on Monday, where he discussed the past and future of OpenAI. The company's next goal is "superintelligence in the true sense of the word," he says. "We love our current products, but we are here for the glorious future. Superintelligent tools could massively accelerate scientific discovery and innovation well beyond what we are capable of doing on our own, and in turn massively increase abundance and prosperity."
OpenAI CEO Sam Altman's recent statements about achieving AGI and aiming for superintelligence have ignited discussions about AI progress, timelines, and implications for the workforce and society.
OpenAI CEO Sam Altman has made bold statements about the company's progress towards artificial general intelligence (AGI) and superintelligence. In a recent blog post, Altman declared, "We are now confident we know how to build AGI as we have traditionally understood it" [1]. This assertion has sparked intense debate within the AI community and beyond.
While there is no universally accepted definition of AGI, OpenAI has historically defined it as "a highly autonomous system that outperforms humans at most economically valuable work" [2]. Superintelligence, a concept popularized by philosopher Nick Bostrom, refers to "any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest" [2].
Altman suggested that AI agents could join the workforce as soon as 2025, potentially transforming company outputs [1][3]. In an interview with Bloomberg, he even predicted that AGI might be developed during the current U.S. presidential term [2]. These ambitious timelines have raised eyebrows among experts and critics alike.
Not everyone is convinced by OpenAI's claims. AI researcher Gary Marcus expressed skepticism, stating, "There is zero justification for claiming that the current technology has achieved general intelligence" [1]. Others, like Mustafa Suleyman of Microsoft AI, have cautioned against making categorical declarations due to the high uncertainty surrounding AGI development [2].
OpenAI's bold statements come against a backdrop of significant financial challenges. The company is reportedly losing billions annually and does not expect to turn a profit until 2029 [2]. Some speculate that Altman's optimistic predictions may be aimed at maintaining investor interest despite substantial operating costs [4].
Altman's vision includes AI agents joining the workforce in 2025, potentially augmenting or replacing human staff in various roles [3]. This prospect raises questions about the future of work and the potential need for universal basic income (UBI) to address economic disruptions [3].
Despite claims of nearing AGI, OpenAI is actively hiring, with 158 open positions as of January 2025 [5]. This hiring push, particularly in research and engineering, suggests that achieving and implementing AGI and superintelligence still requires significant human expertise.
The rapid advancement towards AGI and superintelligence raises important ethical and societal questions. OpenAI acknowledges the potential risks, stating that "a misaligned superintelligent AGI could cause grievous harm to the world" [2]. The company emphasizes the need for careful development and deployment of these powerful technologies.
Reactions to OpenAI's claims have been mixed. While some industry figures express excitement about the potential breakthroughs, others caution against hype and overoptimism. The debate highlights the ongoing uncertainty surrounding AGI development and its potential impacts on society [3][4].
As the AI landscape continues to evolve rapidly, the coming years will be crucial in determining whether OpenAI's ambitious predictions come to fruition and how society will adapt to increasingly powerful AI systems.