12 Sources
[1]
Nvidia CEO Jensen Huang says 'I think we've achieved AGI'
On a Monday episode of the Lex Fridman podcast, Nvidia CEO Jensen Huang made a hot-button statement: "I think we've achieved AGI."

AGI, or artificial general intelligence, is a vaguely defined term that has incited a lot of discussion among tech CEOs, tech workers, and the general public in recent years, as it typically denotes AI that's equal to or surpasses human intelligence. In recent months, tech leaders have tried to distance themselves from the term and create their own terminology that they view as less over-hyped, more useful, and more clearly defined (although the new phrases they've come up with essentially mean the same thing as AGI). The term has also been the subject of key clauses in big-ticket contracts between companies like OpenAI and Microsoft, upon which a significant amount of money may hinge.

Fridman, the podcast's host, defines AGI as an AI system that's able to "essentially do your job," as in start, grow, and run a successful tech company worth more than $1 billion. He then asks Huang when he believes AGI will be real -- asking if it's, say, five, 10, 15, or 20 years away -- and Huang responds, "I think it's now. I think we've achieved AGI." Fridman says, "You're gonna get a lot of people excited with that statement."

Huang goes on to mention OpenClaw, the open-source AI agent platform, and its viral success. He said that people are using their individual AI agents to do all sorts of things, and that he "wouldn't be surprised if some social thing happened or somebody created a digital influencer ... or some social application that, you know, feeds your little Tamagotchi or something like that, and it become out of the blue an instant success."

But Huang then seemed to slightly walk back his earlier claims, saying, "A lot of people use it for a couple of months and it kind of dies away. Now, the odds of 100,000 of those agents building Nvidia is zero percent."
[2]
Nvidia CEO Jensen Huang Says He Thinks Artificial General Intelligence Is Here
Artificial general intelligence (AGI) has become the ultimate goal for tech CEOs in recent years, and Nvidia's CEO Jensen Huang claims the industry may have already reached it. However, it all depends on your own definition of AGI.

In an interview on the Lex Fridman podcast, the two discuss the path to AGI, with Fridman defining the term as a tool to "essentially do your job," specifically referring to Huang's role as a tech CEO. Fridman suggests that AGI could be defined by an AI tool launching and running a successful tech brand, with the caveat that the tool would need to make a company worth more than a billion dollars for it to count. Fridman asks Huang whether he believes it would take five or perhaps 20 years to reach that level of capability. Huang responds, "I think it's now. I think we've achieved AGI."

Huang points to a hypothetical situation he believes would be possible, in which a modern autonomous AI could "create a web service, some interesting little app that all of a sudden a few billion people used for 50 cents, and then it went out of business again shortly after." Huang says, "I wouldn't be surprised if some social thing happened or somebody created a digital influencer, super, super cute, or some social application that, you know, feeds your little Tamagotchi or something like that, and it becomes an out of the blue an instant success. A lot of people use it for a couple of months, and it kind of dies away." "Now, the odds of 100,000 of those agents building Nvidia is zero percent."

The problem with AGI is that there isn't a clear definition of when a tool would have reached artificial general intelligence, and many CEOs and other tech speakers have different views of what it could mean. Other definitions of AGI, including PCMag's own, deem it "a machine intelligence that is equal to or greater than that of a human being."
Others include the caveat that it must be equal to or greater than a human being in all cognitive tasks, a condition that Fridman's definition didn't specify.
[3]
"We've achieved AGI," says Nvidia CEO, but his own examples suggest otherwise
Big quote: Nvidia CEO Jensen Huang declared this week that AGI - short for "artificial general intelligence" - has already arrived, before quickly softening the claim. In one podcast appearance, Huang said flatly, "I think we've achieved AGI," before conceding that today's AI systems may not yet match human capability. In a separate conversation, he struck a different note, chastising engineers who underuse AI tools and warning he would be "deeply alarmed" if they were not spending enough on the very systems he had just suggested were already intelligent.

The tension between those remarks - one heralding the dawn of human-level AI, the other implying it still requires significant human guidance - captures the ambiguity surrounding how close the industry truly is to achieving AGI. The two appearances came days apart. On March 19, Huang sat down with the All-In Podcast at Nvidia's GPU Technology Conference in San Jose. Three days later, on March 22, his interview with Lex Fridman was released.

In the Fridman interview, Huang said bluntly, "I think we've achieved AGI," referring to the class of systems expected to match or surpass human intelligence. The statement instantly intensified an already polarized debate about what exactly qualifies as "general" intelligence - and whether anyone, including Nvidia, can credibly say it has been reached.

The exchange was prompted by Fridman's own definition of AGI as a system capable of essentially doing your job, including starting, growing, and running a successful technology company worth more than $1 billion. Asked for a timeline - within five, ten, or even twenty years - Huang didn't hesitate. "I think it's now," he said. His caveat, though, was telling: "You said a billion," he added, "and you didn't say forever" - framing AGI not as a durable human-level mind, but as a momentary commercial threshold.
Fridman noted that Huang's definition could "get a lot of people excited," and indeed it did. Tech leaders and researchers have long disagreed over whether current AI systems truly demonstrate general intelligence or just mimic fragments of it. The term itself has become loaded, shaping billion-dollar contracts and strategic direction at companies such as OpenAI and Microsoft, where performance benchmarks and risk clauses hinge on whether AGI has been "achieved."

Huang cited the rapid evolution of open-source AI agent platforms such as OpenClaw, which is in the process of being acquired by OpenAI, where developers use digital agents to launch social applications and creative experiments. He described a wave of entrepreneurial creativity: AI that can design influencers, automate digital communities, and perhaps "become an instant success." But he quickly tempered those remarks, acknowledging the limits of the technology. "A lot of people use it for a couple of months and it kind of dies away," he said. "The odds of 100,000 of those agents building Nvidia is zero percent."

That mix of ambition and restraint was also clear in Huang's earlier appearance on the All-In Podcast, where the conversation turned from AGI's potential to how humans are - or aren't - leveraging it. There, he drew a sharp line between talent and tool use. "If that $500,000 engineer did not consume at least $250,000 worth of tokens, I am going to be deeply alarmed," he said. Tokens - units AI models use to process and generate language - represent both the cost and capacity of AI work. For Huang, under-spending on tokens signals under-utilization of AI itself. "This is no different than one of our chip designers saying, 'Guess what? I'm just going to use paper and pencil'" - forgoing CAD tools entirely, he said.
Nvidia is reportedly trying to allocate $2 billion for token access across its engineering team, with Huang suggesting that tokens could even become a formal part of compensation packages. "They're going to make several hundred thousand dollars a year, their base pay," he said. "I'm going to give them probably half of that on top of it as tokens so that they could be amplified 10X."
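Huang's rule of thumb (token spend at roughly half of base pay) can be sketched as simple arithmetic. A minimal illustration; the per-million-token price below is an assumption for the example, not a figure Nvidia or Huang has quoted:

```python
# Illustrative sketch of Huang's "half of base pay in tokens" remark.
# PRICE_PER_MILLION_TOKENS is an assumed blended rate, purely for illustration.

PRICE_PER_MILLION_TOKENS = 10.0  # assumed $/1M tokens (hypothetical)

def token_budget(base_salary: float, ratio: float = 0.5) -> float:
    """Annual token budget implied by the 'half of base pay' rule of thumb."""
    return base_salary * ratio

def tokens_purchasable(budget: float,
                       price_per_million: float = PRICE_PER_MILLION_TOKENS) -> float:
    """How many tokens that budget buys at the assumed blended price."""
    return budget / price_per_million * 1_000_000

budget = token_budget(500_000)  # the $500,000 engineer from Huang's quote
print(f"budget: ${budget:,.0f}")                      # budget: $250,000
print(f"tokens: {tokens_purchasable(budget):,.0f}")   # tokens: 25,000,000,000
```

At the assumed price, the $250,000 threshold Huang names corresponds to tens of billions of tokens per engineer per year; the real figure would depend entirely on which models and rates the team actually uses.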
[4]
'I think we've achieved AGI' -- Nvidia's CEO believes we've finally achieved artificial general intelligence
Nvidia CEO Jensen Huang is back in the news again, and this time, he makes a bold statement about AI finally reaching a new, evolved stage.

Nvidia and its CEO Jensen Huang have been in the news a lot lately. Gamers haven't been too kind to the massive tech company, with negative reactions flooding the comments section underneath the Nvidia DLSS 5 reveal video. In response, CEO Jensen Huang pushed back during a press Q&A at this year's GPU Technology Conference, saying gamers are "completely wrong" about the backlash. Huang doubled down in a later interview with "Mad Money's" Jim Cramer and championed the open-source autonomous AI called OpenClaw, calling it "definitely the next ChatGPT."

Huang has popped up on everyone's tech-themed timelines again to make another bold assertion during an appearance on Lex Fridman's podcast. And as expected, it speaks to an evolved stage of AI that could even do his job and run an entire multi-trillion-dollar company.

Huang believes artificial general intelligence has finally arrived

When asked when AGI (artificial general intelligence) might arrive, Nvidia CEO Jensen Huang didn't hesitate: "I think it's now. I think we've achieved AGI." Put simply, AGI refers to AI that can match -- or even surpass -- human intelligence across a wide range of cognitive tasks. It's not just about answering questions or generating text; it's about systems that can learn, reason and apply knowledge the way humans do.

Huang suggested this shift could show up in unexpected ways, pointing to the possibility of a breakout app or digital personality. "I wouldn't be surprised if some social thing happened -- like a digital influencer or some kind of app that suddenly becomes an instant success," he said. Still, he stopped short of claiming AI is ready to fully replace human decision-making. Huang acknowledged that today's AI agents aren't capable of running a company like Nvidia on their own.
"A lot of people use it for a couple of months, and it kind of dies away," he said. "Now, the odds of 100,000 of those agents building NVIDIA is zero percent." That stance marks a shift from his earlier timeline. At the 2023 New York Times DealBook Summit, Huang said AGI was still about five years away and would eventually be able to outperform humans on intelligence tests.

Bottom line

Huang is obviously all-in on AI. With his most recent support of OpenClaw, Nvidia's reveal of DLSS 5, and the company's overall implementation of agentic AI, it's clear that he is looking to further modernize and evolve the trending technology. Whether or not AGI is truly here and capable of handling the lofty responsibilities usually entrusted to humans remains an open question. Seeing as Huang walked back his major statement about AGI a bit, it seems that even he thinks we're not quite there just yet.
[5]
'I think we've achieved AGI' -- er Jensen... I don't think we have
Nvidia CEO Jensen Huang just said, "I think we've achieved AGI," while on a podcast. Of course, this has generated a lot of buzz as, if he's correct, it would be a major leap forward in AI capabilities. Spoiler: we haven't made AGI.

While appearing on the Lex Fridman podcast, Fridman defined AGI to Jensen Huang as a tool to "essentially do your job" -- specifically, Huang's role of launching a successful company that grows to a value of over a billion dollars. In response to Fridman's question of how many years that kind of capability is away from launch, Huang said, "I think it's now. I think we've achieved AGI."

He then went on to explain that an AI might not achieve Nvidia's lasting success, but it could maybe make a viral paid app that costs 50 cents and sell it to a lot of people before going out of business. He explained it could be "some social application that, you know, feeds your little Tamagotchi or something like that, and it becomes an out-of-the-blue instant success. A lot of people use it for a couple of months, and it kind of dies away." Meanwhile, the odds of AI producing an Nvidia, even 100,000 agents doing so, are "zero." The thing is, this isn't AGI -- even an AI that mimics Nvidia's success wouldn't be AGI. It would be impressive, sure, but artificial general intelligence is something much more special.

No, we haven't achieved AGI

Artificial general intelligence is the holy grail of AI development. It would be a digital form of human intelligence -- that is, rather than an AI needing to be trained on each specific task being asked of it (as existing, so-called narrow AI is right now), the bot would be able to apply its existing knowledge to new situations just like a human can. AGI would combine self-learning, common sense, contextual understanding, and the ability to think abstractly at high speed into one system.
It would be a monumental leap forward for what AI is capable of, but as you might expect, it isn't something researchers have been able to crack quickly -- with some arguing that we might never achieve it. And even if AGI is ultimately achievable, most researchers believe we aren't anywhere close to it. The majority of the 475 AI researchers surveyed by the Association for the Advancement of Artificial Intelligence (76%) said that scaling up our current AI efforts is unlikely or very unlikely to result in AGI. AGI isn't just an upgraded LLM; it's a whole different AI architecture, and it requires its own research and development -- imagine trying to build an incredible airplane by making better and better cars; that's sort of what's happening with AGI and LLMs.

At the same time, AGI isn't a well-defined thing -- partly because it's hard to define something which doesn't exist yet. There's a difference between AGI and an AI that can simply do lots of different things, but where the line is drawn is difficult to determine. What further muddies the water is the financial incentive companies have to deliver AGI, or at least to promise it's almost here. For example, OpenAI's deal with Microsoft gives it some incredible benefits if AGI is achieved. AGI, and making it feel close, is also how you appeal to investors. The economic potential of AGI is huge for how it could truly revolutionize every industry, and the promise/hope that it's just around the corner is what could convince investors to hold onto their stake in the AI company of their choice for longer -- or risk feeling the ultimate financial FOMO -- rather than selling out. This is true for Nvidia, too, which, as the pick-and-shovel seller in this AI gold rush, wants to keep hype up. If it falls, demand for chips would drop too, and that would seriously affect Nvidia's bottom line.

At the same time, many have noted that AGI isn't the be-all and end-all.
Just because an AI isn't a jack-of-all-trades doesn't mean it can't be a master of one, and just like humans, having someone/something that hyperspecializes in a key area is in some ways more useful than something that's okay at lots of tasks. I don't care if my surgeon is a decent horticulturist, could teach me to be a confident skier, and moonlights as a vintage car restorer -- I simply want them to be a leading expert in human biology as they cut me open. As AI stretches into medicine, law, manufacturing, and everything else, a series of individual experts is more than fine; it's arguably ideal -- even if it isn't as flashy as AGI.

So no, AGI isn't here yet, but AI disruption is, and it will only creep further into our lives. Today is as bad as AI will ever be. It will only get better, and it's only a matter of time before AI morphs from being an assistive tool to seriously eating up whole jobs -- with or without AGI.
[6]
NVIDIA CEO Jensen Huang's definition of AGI is telling
Artificial General Intelligence, or AGI, has spent the last year or so as the AI industry's favorite buzzword. As the sector's leading companies burn through capital at historic rates, racking up energy costs and investor expectations that grow harder to meet by the quarter, the promise of imminent human-level machine intelligence has become a useful thing to have in your back pocket. Whether we're actually close to that milestone depends almost entirely on how you define it. That definitional flexibility, it turns out, is doing a lot of work. Take, for example, Jensen Huang, the CEO of NVIDIA -- a company currently valued at roughly $4 trillion, built largely on the GPU hardware that powers the AI boom -- who recently sat down with podcaster Lex Fridman for a wide-ranging conversation covering data centers, geopolitics, and the question of whether AGI has already arrived. Huang thinks it has. The reasoning behind that claim, however, is fairly dubious. As Fridman points out, Huang has previously said the timeline for AGI depends on what defines it. At the 2023 New York Times DealBook Summit, Huang defined AGI as software capable of passing tests that approximate normal human intelligence at a reasonably competitive level. He expected AI to clear that bar within five years. For his part, Fridman offered Huang a generous definition to work with: true AGI, in Fridman's framing, would look like an AI capable of starting, growing, and running a technology company worth more than a billion dollars. He asked whether that was achievable in the next five to 20 years, given the recent proliferation of agentic AI tools like OpenClaw. Huang didn't need five to 20 years. "I think it's now. I think we've achieved AGI," he replied to Fridman. That, however, is based on a narrow interpretation of what Fridman asked. The way Huang sees it, the AI doesn't need to build anything lasting. It doesn't need to manage people, navigate a board, or sustain a business. 
It just needs to hit a billion dollars once. "You said a billion," Huang told Fridman, "and you didn't say forever." The through-line in both cases isn't a consistent theory of machine intelligence. It's a consistent pattern of defining the threshold in whatever way makes "yes, we're there" the easiest possible answer. His illustration of what that might look like is telling. After his initial answer, Huang lays out his thoughts, describing a scenario in which an AI creates a simple web service -- some app that goes viral, gets used by a few billion people at 50 cents a pop, and then quietly folds. He then points to the dot-com era as precedent, arguing that most of those websites were no more sophisticated than what an AI agent could generate today. Huang was also candid about the ceiling of that vision. "The odds of 100,000 of those agents building NVIDIA," he said plainly, "is zero percent." That's not a small caveat. It's the whole ballgame. What Huang is actually describing -- a viral app that monetizes briefly and dies -- is a far cry from the transformative, economy-reshaping AGI that dominates the public conversation. So, by his own admission, the kind of compound institutional intelligence required to build something like NVIDIA is nowhere in the picture yet.
[7]
Nvidia's Jensen Huang says 'We've achieved AGI.' But no one can agree on what AGI means. | Fortune
Last week, Nvidia CEO Jensen Huang made headlines when he told podcaster Lex Fridman that AGI -- artificial general intelligence -- had already been achieved.

AGI has long been the ultimate goal of many artificial intelligence researchers. That's been the case even though there is no universally accepted definition of the term. It generally means AI that is as intelligent as humans, but there is a fierce debate over exactly how to define and measure "intelligence."

In this case, Fridman had offered Huang a very unusual metric for AGI: Could AI start and grow a technology business to the point where it was worth $1 billion? Fridman asked if Huang thought AGI by this definition could be achieved within the next five to 20 years. Huang said he didn't think that amount of time was necessary. "I think it's now. I think we've achieved AGI," he said. He then hedged, noting the company didn't necessarily have to remain that valuable. "You said a billion," Huang told Fridman, "and you didn't say forever."

Few AI researchers agree with the definition of AGI that Fridman offered Huang, which was both more specific (a company worth $1 billion) and more narrow than most AGI definitions (which tend to refer to matching a vast range of human cognitive skills, not all of which might be needed to build a successful business). But AI researchers also disagree with one another over what a better definition should be. The term remains stubbornly amorphous despite the fact that several leading AI companies, with collective market valuations of more than $1 trillion, say that AGI is what they are racing towards. Some computer scientists avoid using the term at all precisely because they say it is perpetually undefined and unmeasurable. Others say tech companies like using the term for completely cynical reasons -- precisely because it is ill-defined, it's easy for companies to build hype by claiming big strides towards achieving the fabled milestone.
The buzz over Huang's AGI remarks only serves to highlight this quandary at the heart of the AI boom. In fact, just days before Fridman dropped his podcast, researchers at Google DeepMind -- including DeepMind cofounder Shane Legg, who first helped popularize the term AGI in the early 2000s -- published a new research paper that proposed a more scientific way to define and assess whether AI models had achieved general intelligence. The paper, "Measuring Progress Toward AGI: A Cognitive Framework," draws on decades of research in psychology, neuroscience, and cognitive science to construct what its authors call a "Cognitive Taxonomy." The taxonomy identifies 10 key cognitive faculties -- including perception, reasoning, memory, learning, attention, and social cognition -- that the researchers argue are essential for general intelligence. The framework then proposes evaluating AI systems across all 10 faculties and comparing their performance to a representative sample of human adults with at least the equivalent of a secondary education. The paper's key insight is that today's AI models have a "jagged" cognitive profile: They may exceed most humans in some areas, like mathematics or factual recall, while dramatically trailing even average people in others, like learning from experience, maintaining long-term memories, or understanding social situations. An AI model would need to at least match median human performance across all 10 areas to be considered AGI, the Google DeepMind researchers suggest. The researchers also announced a contest with a $200,000 prize pool on the popular machine learning competition site Kaggle for outside researchers to help build evaluations for the five cognitive faculties where existing benchmark tests are weakest. The DeepMind paper is only the latest in a string of recent attempts to put the measurement of intelligence on more rigorous footing. 
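The DeepMind criterion described above reduces to a minimum test: a system qualifies only if its weakest faculty still meets the median human bar, so no amount of superhuman performance elsewhere can compensate. A toy sketch of that logic; the six faculty names are the ones quoted from the paper (which lists ten in total), and the scores are invented for illustration:

```python
# Toy check of a DeepMind-style AGI criterion: AGI requires matching median
# human performance (normalized to 1.0 here) in EVERY faculty, not on average.
# Only the six faculties named in the article are listed; scores are invented.

FACULTIES = ["perception", "reasoning", "memory",
             "learning", "attention", "social_cognition"]

def is_agi(scores: dict, human_median: float = 1.0) -> bool:
    """All-faculties criterion: the weakest faculty decides the verdict."""
    return all(scores.get(f, 0.0) >= human_median for f in FACULTIES)

# A 'jagged' profile: superhuman recall and math-like reasoning,
# but weak continual learning and social cognition.
jagged = {"perception": 1.1, "reasoning": 1.3, "memory": 2.0,
          "learning": 0.4, "attention": 0.9, "social_cognition": 0.6}

print(is_agi(jagged))  # False: high peaks don't offset weak faculties
```

Under this rule the jagged profile fails despite averaging well above the human median, which is exactly the point the researchers make about today's models.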
Last year, a team led by Dan Hendrycks at the Center for AI Safety, and that included deep learning pioneer Yoshua Bengio, published their own AGI framework and metrics. That paper also divided general intelligence into 10 separate cognitive domains, drawing on a framework for human intelligence developed by three psychologists -- Raymond Cattell, John Horn, and John Carroll -- that is the most empirically validated model of human cognition. It produced "AGI Scores" for existing AI models; the most capable system tested, OpenAI's GPT-5, which was released in August 2025, scored just 57%, falling far short of matching a well-educated adult across all the cognitive dimensions. One of the most ambitious practical attempts to highlight what today's AI systems still cannot do is the ARC-AGI benchmark, created by well-known machine learning researcher François Chollet. Chollet's core argument is that intelligence should be measured not by what a system already knows, but by how efficiently it can learn new skills. The ARC-AGI benchmark consists of visual puzzle tasks involving grids of colored cells. Each task shows a few examples of an input grid being transformed into an output grid according to a hidden rule, and the test-taker must figure out the rule and apply it to a new input. For a human, grasping the pattern typically takes seconds. For frontier AI models, these puzzles remain surprisingly difficult, because they require the kind of flexible, abstract reasoning -- spotting symmetries, understanding spatial relationships, inferring rules from a handful of examples -- that current systems struggle with. This month, Chollet and his collaborators launched ARC-AGI-3, the latest and most demanding version of the benchmark. 
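The static ARC task format described above can be made concrete with a toy example. A hedged sketch only: the hidden rule here (a horizontal mirror) and the tiny candidate-rule library are invented for illustration, and real ARC-AGI tasks are far harder than anything a fixed rule list could solve:

```python
# Toy ARC-style task: each example pairs an input grid with an output grid
# produced by a hidden rule; the solver must infer the rule from the examples
# and apply it to new inputs. The hidden rule here (mirror each row) is
# vastly simpler than real ARC-AGI tasks -- purely for illustration.

def mirror(grid):
    """The hidden rule (unknown to the solver): flip each row left-to-right."""
    return [row[::-1] for row in grid]

# A tiny library of candidate transformations the solver can test.
CANDIDATE_RULES = {
    "identity": lambda g: [row[:] for row in g],
    "mirror": mirror,
    "transpose": lambda g: [list(col) for col in zip(*g)],
}

def infer_rule(examples):
    """Return the name of the first candidate consistent with all examples."""
    for name, rule in CANDIDATE_RULES.items():
        if all(rule(inp) == out for inp, out in examples):
            return name
    return None  # no candidate explains the examples

examples = [([[1, 0], [0, 2]], [[0, 1], [2, 0]]),
            ([[3, 3, 0]], [[0, 3, 3]])]
print(infer_rule(examples))  # mirror
```

The gap Chollet points to is that a human can infer a genuinely novel rule from a couple of examples, whereas this sketch can only recognize rules it was given in advance; frontier models sit uncomfortably between the two.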
Unlike earlier editions, which presented static puzzles, ARC-AGI-3 is interactive: AI agents must explore novel environments, acquire goals on the fly, build adaptable world models, and learn continuously over multiple steps -- abilities that come naturally to humans but that remain at the frontier of AI research. Taken together, these new benchmarks represent a growing effort within the AI research community to replace vague definitions about AGI with something closer to scientific measurement. But as these researchers are the first to admit, the difficulty of defining intelligence is as old as the study of thinking itself -- and has plagued artificial intelligence as a field from its very earliest days. In 1950, before the term "artificial intelligence" had even been coined and when mathematicians and electrical engineers were just starting to build the first modern computers, the famed British mathematician and computer pioneer Alan Turing wrestled with the fact that it was extremely difficult to formulate a definition of intelligence. Rather than attempting one, Turing proposed an assessment he called "the Imitation Game," which later became better known as the Turing Test. It stipulated that a machine should be considered intelligent when it can hold a general conversation with a person, via text, and a second human judge, reading the exchange, cannot reliably determine which participant is the machine and which the human. It was, in essence, an "I'll know it when I see it" approach to intelligence. But the Turing Test soon proved problematic too. Eliza, a chatbot developed at MIT in the mid-1960s, was designed to mimic a psychotherapist. Most of its responses followed hard-coded logical rules; Eliza often answered users with questions such as "Why do you think that is?" or "Tell me more" to cover up its weak language understanding. And yet Eliza fooled some people into believing it understood them. 
Eliza came close to passing the Turing Test even though on almost every other measure it came nowhere close to human cognitive abilities. And, in fact, a more sophisticated chatbot called "Eugene Goostman" officially passed a live Turing Test competition in 2014, again without touching most human cognitive skills. And while today's large language models converse far more fluently than Eliza ever could, they still cannot match humans across the full spectrum of cognitive abilities -- they hallucinate facts, struggle with long-horizon planning, and cannot learn from experience the way a person does.

Compared to the Turing Test, the term "artificial general intelligence" is a relatively recent one. It was coined in 1997 by Mark Gubrud, then a graduate student at the University of Maryland, who used the neologism in a paper he presented at a conference on nanotechnology. He used the phrase "advanced artificial general intelligence" to describe AI systems that could "rival or surpass the human brain in complexity and speed, that can acquire, manipulate, and reason with general knowledge, and that are usable in essentially any phase of operations where a human intelligence would otherwise be needed." But the paper quickly vanished into obscurity.

Then, in the early 2000s, Legg -- who would go on to cofound DeepMind -- independently coined the same term. He was collaborating with computer scientists Ben Goertzel, Cassio Pennachin, and others on a book about potential ways to create machine learning systems that would be able to address a wide range of problems and tasks. They wanted a term that would distinguish the ambition of these systems from the narrow machine learning algorithms then in vogue, which, once trained, could only tackle a single, narrow task. Goertzel considered calling this more general AI "real AI" or "strong AI," but Legg suggested "artificial general intelligence" instead, unaware of Gubrud's earlier usage.
He also suggested the term be abbreviated as AGI. This time, AGI took off.

In Goertzel's book he defined AGI as "AI systems that possess a reasonable degree of self-understanding and autonomous self-control, and have the ability to solve a variety of complex problems in a variety of contexts, and to learn to solve new problems that they didn't know about at their time of creation." The definition was useful for separating work on general AI systems from narrow machine learning ones, but it too contained a fair amount of unhelpful ambiguity: What did "reasonable degree" mean? Which complex problems in which contexts counted towards the standard?

Legg would later compound this ambiguity by offering a more casual definition of AGI that was in some ways narrower (it didn't talk about self-understanding, for instance) but equally vague. For instance, he told The Atlantic's Nick Thompson last year, "I define an AGI to be an artificial agent that can do the kinds of cognitive things that people can typically do. I see this as the natural minimum bar." But which things? And which people?

Questions like this have continued to swirl around AGI. Does the term mean software that matches the cognitive abilities of an average human? Or the abilities of the humans with the highest IQs? Or the best expert in each individual domain of knowledge? The Hendrycks and Bengio research paper, for instance, defines AGI as matching or exceeding "the cognitive versatility and proficiency of a well-educated adult." The DeepMind paper proposes measuring against a representative sample of adults. Others have used less precise formulations. Adding to the confusion, AGI is often conflated in public discussion with a concept AI researchers call "artificial superintelligence," or ASI -- an AI that would be smarter than all humans combined.
Most AI researchers consider AGI and ASI to be separate milestones, and very different in degree of sophistication, but in the popular imagination the two frequently blur together. If the academic debate over defining AGI has been long and nuanced, the corporate world has introduced definitions that are, to put it charitably, idiosyncratic. DeepMind became the first company to make the pursuit of "artificial general intelligence" a business goal. Legg put the phrase on the front page of the company's first business plan when he, Demis Hassabis, and Mustafa Suleyman cofounded the company in 2010. Five years later, OpenAI also made building AGI its explicit mission. Its original 2015 founding principles said that the new lab -- at the time a non-profit -- was dedicated to ensuring "that artificial general intelligence benefits all of humanity." Three years later, when the lab first set up a for-profit arm, it published a charter that defined AGI "as highly autonomous systems that outperform humans at most economically valuable work." Now, for the first time, AGI was being measured by financial metrics, not mere cognitive ones. And, as it turned out, OpenAI would soon secretly set a highly specific financial threshold for AGI. When Microsoft first invested $1 billion into OpenAI's for-profit arm in 2019, the tech giant's agreement with the AI startup made it OpenAI's preferred commercialization partner for any AI model the lab developed up to, but crucially not including, AGI. At the time, it was reported that the decision of when AGI had been achieved would be at the discretion of OpenAI's non-profit board. But, crucially, according to reporting by tech publication The Information in 2024, when Microsoft agreed to invest a further $10 billion into OpenAI in 2023, its contract with OpenAI contained a clause that defined AGI as a technology that could generate at least $100 billion in profits. OpenAI is nowhere near that mark. 
The company has reportedly told investors it made $13 billion in revenues last year, but still managed to burn through $8 billion in cash. It does not expect to break even until 2030. Despite being far short of the financial threshold for AGI in its contract with Microsoft, OpenAI CEO Sam Altman has often made statements that suggest OpenAI is close to achieving the AI milestone as measured by other benchmarks. In a post to his personal blog in January 2025 titled "Reflections," Altman wrote that OpenAI was "now confident we know how to build AGI as we have traditionally understood it" and that the company was beginning to turn its aim towards superintelligence. In a subsequent essay titled "Three Observations," he wrote that systems pointing toward AGI were "coming into view." Yet, at other times, Altman has seemed to acknowledge AGI's weakness as a concept. Around the same time as his "Reflections" blog post, Altman told a Bloomberg News interviewer that AGI "has become a very sloppy term." Microsoft has also chosen to ignore the financial definition of AGI it struck with OpenAI when it suited the company's marketing purposes. In March 2023, a team of Microsoft researchers published a 154-page paper about GPT-4 provocatively titled "Sparks of Artificial General Intelligence," arguing the model could "reasonably be viewed as an early (yet still incomplete) version" of AGI. The paper was widely criticized for hyping the abilities of GPT-4 for commercial purposes. Even Altman distanced himself, calling GPT-4 "still flawed, still limited." The new research and benchmarks from Google DeepMind and the Hendrycks-Bengio team make some progress towards establishing a yardstick for AGI, one rooted in decades of study of human intelligence. And what's clear is that today's best AI models still don't measure up to the breadth and depth of human cognitive abilities.
Huang, the Nvidia CEO, knows this, just as he was no doubt fully aware of the social media frenzy and headlines he would generate by saying AGI had been achieved. We know Huang knows this because later in the same podcast in which he said "AGI is achieved" he also said that the popular OpenClaw AI agents, which can be powered by any of the top AI models from companies such as Anthropic and OpenAI, could never replicate Nvidia. "Now, the odds of 100,000 of those agents building Nvidia is zero percent," he said. Huang is not just Nvidia's CEO. He is also the company's founder and the person who has run the company for 33 years, piloting it past near-bankruptcy at one point, to see it now worth more than $4 trillion, making it one of the most valuable companies on the planet. In many ways, Huang is a singular genius. But he's also a very human one. So maybe we need a new standard, not AGI but AJI -- artificial Jensen intelligence. When AI reaches that level, the AI boosters on social media who breathlessly amplified Huang's AGI claim will really have something to get excited about.
[8]
Is AGI really here as Nvidia's Jensen Huang claims?
AGI is vaguely defined as computer systems capable of human-level intellect. Nvidia founder and CEO Jensen Huang sat down with Lex Fridman for a podcast yesterday (23 March), where he claimed that artificial general intelligence (AGI) has been achieved. When asked by Fridman for a timeline for reaching AGI, Huang responded: "Now. I think we've achieved AGI". He further claimed that it is "possible" that a company could be run by the advanced AI system. Explaining his rationale behind the statement, Huang argued that many users in China already deploy their personal agents created by OpenClaw (called Claws) to "go out and look for jobs" and "do work, make money". OpenClaw is backed by OpenAI, which recently hired its founder Peter Steinberger to help build the "next generation" of personal AI agents. However, the Nvidia chief stopped short of saying that such agents could build his company. "A lot of people use it for a couple of months and it kind of dies away. Now, the odds of 100,000 of those agents building Nvidia is zero percent." The exact definition of AGI is difficult to pin down, adding to the vagueness around claims of reaching it. According to Google, AGI refers to the hypothetical intelligence of a machine that allows it to "understand" or "learn" intellectual tasks that humans can. IBM calls it the "abstract goal of AI development", where human intelligence can be replicated by machines or software. Plus, there are many claims of what could happen once AGI is achieved, from a dramatic disruption of labour and an even bigger concentration of wealth, to newer advances in research and technology. And admittedly, the multitrillion-dollar Nvidia has huge stakes in the AI race, being a major provider of AI chips and infrastructure to companies worldwide. In January, Huang told the audience at Davos that AI is becoming the foundation of the "largest infrastructure buildout in human history".
OpenAI CEO Sam Altman told Forbes last month that they have "basically built AGI, or very close to it". His company, he said, is "110pc" focused on its core mission of AGI. Meanwhile, Microsoft CEO Satya Nadella contradicted Altman's claim and said that we are nowhere close to AGI. And Anthropic CEO Dario Amodei says the timeline is more likely between one and three years. For now, OpenAI's AGI claim - if made official - will need to be verified by an independent expert panel first.
[9]
NVIDIA CEO Jensen Huang says AGI has now been 'achieved'
TL;DR: NVIDIA CEO Jensen Huang believes Artificial General Intelligence (AGI) has already been achieved, depending on its definition. He suggests AI could potentially run a billion-dollar company temporarily by creating popular services, but doubts multiple AI agents could build a company like NVIDIA. NVIDIA CEO Jensen Huang recently sat down for an interview where he was asked about his estimated timeline for a company achieving Artificial General Intelligence (AGI), to which he answered, "I think we've achieved AGI," albeit with some caveats. For quite some time, Huang has argued that whether AGI has been achieved depends entirely on how any given person defines it. In the interview, Lex Fridman proposes that an AGI would be capable of replacing Huang at the helm of NVIDIA, or be able to achieve what Huang has achieved with the company. For example, Fridman says this particular AGI would be capable of starting, growing, and running a successful technology company, and that company needs to be worth more than $1 billion. Fridman asked Huang how far away he thinks an artificial system such as this is. In response, Huang said sharply, "I think it's now. I think we have achieved AGI". Fridman followed up by asking if it's possible for a company to run on an AI system. Huang replied, "Possible. And the reason for that is this. You said $1 billion, and you didn't say forever." Huang goes on to state that it's entirely possible an AI system could create a web service, or app, that attracts millions of users for a short period of time. The NVIDIA CEO adds that the likelihood of 100,000 agents building NVIDIA into a company is zero.
[10]
Nvidia CEO Jensen Huang: We have achieved AGI - The Economic Times
Nvidia CEO Jensen Huang said AGI is already achieved, suggesting AI could create and grow a billion-dollar tech company. He argued AI will not remove jobs but change tasks and tools. Using radiology as an example, he said AI improves outcomes, increases demand, and more experts and engineers will still be needed.

Nvidia CEO Jensen Huang said on the Lex Fridman podcast yesterday that artificial general intelligence (AGI) has already been achieved, a milestone many tech veterans deem to be a few years away. In a conversation with Huang in his latest episode, Fridman defined AGI as AI that could on its own start and grow a tech company to a $1 billion valuation, and asked how far we are from such a feat. Huang replied: "I think it's now. I think we've achieved AGI." Explaining further, he said, "For example... it is not out of the question that a Claw (OpenClaw) was able to create a web service, some interesting little app that all of a sudden a few billion people used for 50 cents. And then it went out of business shortly after." Huang compared this to simple internet-era apps that went viral and made money fast. Tools like OpenClaw could do the same today, he said.

Is AI coming for our jobs?

Addressing the fear around AI eliminating jobs, the Nvidia CEO struck an optimistic note, saying that the jobs will not go anywhere, but AI will transform how we use tools to fulfil those roles. "I just want to remind them (people) that the purpose of your job and the tasks and tools you use to do your job are related..." Taking radiology as an example, he said AI researchers had predicted the field would vanish once computer vision achieved "superhuman" levels around 2019-2020. Today, AI drives every radiology platform, yet the number of radiologists has grown. "The purpose is to diagnose disease and help patients," Huang said. Faster scans mean more patients, better outcomes, and booming hospital revenues, which demands more experts.
Huang added, "The number of software engineers at Nvidia is gonna grow, not decline." Engineers solve problems, innovate, and collaborate, core duties that go beyond the coding that AI does better.
[11]
Nvidia's CEO says that artificial general intelligence, AGI, is here
Nvidia's CEO Jensen Huang has said in a new interview that artificial general intelligence (AGI) has already been achieved. The comment was made on a podcast with Lex Fridman. AGI, or "artificial general intelligence", is usually used to refer to an artificial intelligence that is capable of performing a wide range of tasks at or above human level. But there is no established definition for the term, which has led to differences in interpretation among both researchers and technology companies. Fridman described AGI as "a system that could practically perform the work of a human comprehensively, for example, founding and running a billion-dollar technology company". When Jensen Huang was then asked about the timeline for achieving such a thing, he answered that this kind of AGI has already been achieved, as reported by The Verge and Muropaketti. Huang specifically referred to new AI agents and open-source platforms such as OpenClaw, which allow individual users to build autonomous software agents. These are used for a variety of purposes, and some projects have quickly gained great popularity. So according to Huang, it is possible for a single application or digital service to quickly become a widely used phenomenon. But Huang continued that most such experiments will be short-lived, because it is unlikely that a large number of individual AI agents would be able to build a global technology company like Nvidia.
[12]
Nvidia CEO Jensen Huang Drops A Bombshell -- 'I Think We've Achieved AGI' - NVIDIA (NASDAQ:NVDA), Tesla (NASDAQ:TSLA)
NVIDIA (NASDAQ:NVDA) CEO Jensen Huang said he believes Artificial General Intelligence (AGI) has already been achieved on an episode of the Lex Fridman podcast, released on Monday. Fridman proposed defining AGI as an AI capable of building and running a billion-dollar tech company, asking whether that milestone could arrive within 5-20 years. Huang was direct: "I think it's now. I think we've achieved AGI." Huang's threshold doesn't require permanence. In his view, an AI that builds a viral app, earns a billion dollars, and subsequently shuts down still qualifies -- drawing a comparison to dot-com era companies that were short-lived but undeniably real. "I couldn't have predicted any of those companies at the time either," he said, when asked what that breakthrough might look like.

Huang's Previous Outlook

The top boss of the chip designer said AGI could eventually design chips like those made by Nvidia, noting that even today's H100 chips rely heavily on AI. However, he added that AI has not yet surpassed complex human intelligence.

Debate Over AGI Timeline

Ex-Tesla AI Chief Andrej Karpathy, however, believes AGI is still a decade away, stating that the industry is making too big a jump and trying to pretend this is amazing when it's not. "The models are amazing. They still need a lot of work," he said.
Nvidia CEO Jensen Huang declared on the Lex Fridman podcast that artificial general intelligence has arrived, sparking intense debate across the tech industry. But within the same conversation, Huang conceded that AI agents have a zero percent chance of building a company like Nvidia, revealing the tension between AGI hype and current AI capabilities.
On a recent episode of the Lex Fridman podcast, Nvidia CEO Jensen Huang made a striking claim that sent ripples through the tech industry: "I think we've achieved AGI."1
The statement refers to artificial general intelligence, a term that has dominated conversations among tech CEOs and researchers as it typically denotes AI systems equal to or surpassing human intelligence across a wide range of cognitive tasks.2
During the March 22 interview, Fridman defined AGI as a system capable of "essentially doing your job," specifically referring to starting, growing, and running a successful tech company worth more than $1 billion.1
When asked whether this capability was five, ten, or twenty years away, Huang responded without hesitation: "I think it's now. I think we've achieved AGI."2
This marks a notable shift from his 2023 New York Times DealBook Summit appearance, where he estimated AGI was still about five years away.4
Yet within the same conversation, Huang significantly softened his claim. He pointed to OpenClaw, the open-source AI agent platform currently being acquired by OpenAI, as an example of what modern AI agents can accomplish.3
Huang suggested that autonomous AI could potentially "create a web service, some interesting little app" or launch a digital influencer that becomes an instant success, perhaps generating revenue from billions of users at 50 cents each before quickly fading away.2

But the caveat was telling. "A lot of people use it for a couple of months and it kind of dies away," Huang acknowledged. "Now, the odds of 100,000 of those agents building Nvidia is zero percent."1
This admission directly contradicts his earlier assertion, framing AGI not as a durable system with human-level intelligence, but rather as a momentary commercial threshold.3
The tension in Huang's remarks highlights a fundamental challenge: there is no clear definition of when a system has reached artificial general intelligence.2
While some definitions, including industry standards, describe AGI as "a machine intelligence that is equal to or greater than that of a human being" in all cognitive tasks, Fridman's definition didn't specify this comprehensive requirement.2
True AGI would require self-learning capabilities, common sense, contextual understanding, and the ability to think abstractly at high speed—all combined into one system.5
It represents a completely different AI architecture from current large language models, requiring dedicated research and development rather than simply scaling existing systems. In fact, 76% of 475 AI researchers surveyed by the Association for the Advancement of Artificial Intelligence said that scaling up current AI efforts is unlikely or very unlikely to result in AGI.5
The term AGI has become loaded with financial implications, shaping billion-dollar contracts and strategic direction at companies like OpenAI and Microsoft, where performance benchmarks and risk clauses hinge on whether AGI has been "achieved."3
For Nvidia, which serves as the infrastructure provider in the AI boom, maintaining momentum around AGI development directly impacts chip demand and the company's bottom line.5
Just three days before the Lex Fridman podcast, Huang appeared on the All-In Podcast at Nvidia's GPU Technology Conference in San Jose, where he struck a notably different tone. There, he expressed concern about engineers underutilizing AI tools, stating: "If that $500,000 engineer did not consume at least $250,000 worth of tokens, I am going to be deeply alarmed."

Nvidia is reportedly allocating $2 billion for token access across its engineering team, with Huang suggesting tokens could become part of formal compensation packages to amplify engineer productivity by 10X.
This emphasis on human guidance and oversight contradicts the notion that AI has reached human-level general intelligence. The tension between declaring AGI achieved while simultaneously insisting engineers must aggressively leverage AI tools reveals the gap between current capabilities and true artificial general intelligence.
[4]