Curated by THEOUTPOST
On Wed, 12 Mar, 12:05 AM UTC
4 Sources
[1]
AGI is suddenly a dinner table topic
The concept of artificial general intelligence -- an ultra-powerful AI system we don't have yet -- can be thought of as a balloon, repeatedly inflated with hype during peaks of optimism (or fear) about its potential impact and then deflated as reality fails to meet expectations. This week, lots of news went into that AGI balloon. I'm going to tell you what it means (and probably stretch my analogy a little too far along the way).
First, let's get the pesky business of defining AGI out of the way. In practice, it's a deeply hazy and changeable term shaped by the researchers or companies set on building the technology. But it usually refers to a future AI that outperforms humans on cognitive tasks. Which humans and which tasks we're talking about makes all the difference in assessing AGI's achievability, safety, and impact on labor markets, war, and society. That's why defining AGI, though an unglamorous pursuit, is not pedantic but actually quite important, as illustrated in a new paper published this week by authors from Hugging Face and Google, among others. In the absence of that definition, my advice when you hear AGI is to ask yourself what version of the nebulous term the speaker means. (Don't be afraid to ask for clarification!)
Okay, on to the news. First, a new AI model from China called Manus launched last week. A promotional video for the model, which is built to handle "agentic" tasks like creating websites or performing analysis, describes it as "potentially, a glimpse into AGI." The model is doing real-world tasks on crowdsourcing platforms like Fiverr and Upwork, and the head of product at Hugging Face, an AI platform, called it "the most impressive AI tool I've ever tried."
It's not clear just how impressive Manus actually is yet, but against this backdrop -- the idea of agentic AI as a stepping stone toward AGI -- it was fitting that New York Times columnist Ezra Klein dedicated his podcast on Tuesday to AGI. It also means that the concept has been moving quickly beyond AI circles and into the realm of dinner table conversation. Klein was joined by Ben Buchanan, a Georgetown professor and former special advisor for artificial intelligence in the Biden White House.
They discussed lots of things -- what AGI would mean for law enforcement and national security, and why the US government finds it essential to develop AGI before China -- but the most contentious segments were about the technology's potential impact on labor markets. If AI is on the cusp of excelling at lots of cognitive tasks, Klein said, then lawmakers better start wrapping their heads around what a large-scale transition of labor from human minds to algorithms will mean for workers. He criticized Democrats for largely not having a plan. We could consider this to be inflating the fear balloon, suggesting that AGI's impact is imminent and sweeping.
Following close behind and puncturing that balloon with a giant safety pin, then, is Gary Marcus, a professor of neural science at New York University and an AGI critic who wrote a rebuttal to the points made on Klein's show. Marcus points out that recent news, including the underwhelming performance of OpenAI's new GPT-4.5, suggests that AGI is much more than three years away. He says core technical problems persist despite decades of research, and efforts to scale training and computing capacity have reached diminishing returns. Large language models, dominant today, may not even be the thing that unlocks AGI.
He says the political domain does not need more people raising the alarm about AGI, arguing that such talk actually benefits the companies spending money to build it more than it helps the public good. Instead, we need more people questioning claims that AGI is imminent. That said, Marcus is not doubting that AGI is possible. He's merely doubting the timeline.
Just after Marcus tried to deflate it, the AGI balloon got blown up again. Three influential people -- Google's former CEO Eric Schmidt, Scale AI's CEO Alexandr Wang, and director of the Center for AI Safety Dan Hendrycks -- published a paper called "Superintelligence Strategy." By "superintelligence," they mean AI that "would decisively surpass the world's best individual experts in nearly every intellectual domain," Hendrycks told me in an email. "The cognitive tasks most pertinent to safety are hacking, virology, and autonomous-AI research and development -- areas where exceeding human expertise could give rise to severe risks."
[2]
The meaning of artificial general intelligence remains unclear
Testing for AGI may not be the best measure of AI's abilities and impacts
When Chinese AI startup DeepSeek burst onto the scene in January, it sparked intense chatter about its efficient and cost-effective approach to generative AI. But like its U.S. competitors, DeepSeek's main goal is murkier than just efficiency: The company aims to create the first true artificial general intelligence, or AGI.
For years, AI developers -- from small startups to big tech companies -- have been racing toward this elusive endpoint. AGI, they say, would mark a critical turning point, enabling computer systems to replace human workers, making AI more trustworthy than human expertise and positioning artificial intelligence as the ultimate tool for societal advancement.
Yet, years into the AI race, AGI remains a poorly defined and contentious concept. Some computer scientists and companies frame it as a threshold for AI's potential to transform society. Tech advocates suggest that once we have superintelligent computers, day-to-day life could fundamentally change, affecting work, governance and the pace of scientific discovery. But many experts are skeptical about how close we are to an AI-powered utopia and the practical utility of AGI. There's limited agreement about what AGI means, and no clear way to measure it. Some argue that AGI functions as little more than a marketing term, offering no concrete guidance on how to best use AI models or their societal impact.
In tech companies' quest for AGI, the public is tasked with navigating a landscape filled with marketing hype, science fiction and actual science, says Ben Recht, a computer scientist at the University of California, Berkeley. "It becomes very tricky. That's where we get stuck." Continuing to focus on claims of imminent AGI, he says, could muddle our understanding of the technology at hand and obscure AI's current societal effects.
The term "artificial general intelligence" was coined in the mid-20th century. Initially, it denoted an autonomous computer capable of performing any task a human could, including physical activities like making a cup of coffee or fixing a car. But as advancements in robotics lagged behind the rapid progress of computing, most in the AI field shifted to narrower definitions of AGI: first, AI systems that could autonomously perform tasks a human could at a computer, and more recently, machines capable of executing most of the "economically valuable" tasks a human could handle at a computer, such as coding and writing accurate prose. Others think AGI should encompass flexible reasoning ability and autonomy when tackling a number of unspecified tasks.
"The problem is that we don't know what we want," says Arseny Moskvichev, a machine learning engineer at Advanced Micro Devices and computer scientist at the Santa Fe Institute. "Because the goal is so poorly defined, there's also no roadmap for reaching it, nor reliable way to identify it."
To address this uncertainty, researchers have been developing benchmark tests, similar to student exams, to evaluate how close systems are to achieving AGI. For example, in 2019, French computer scientist and former Google engineer François Chollet released the Abstraction and Reasoning Corpus for Artificial General Intelligence, or ARC-AGI. In this test, an AI model is repeatedly given some examples of colored squares arranged in different patterns on a grid.
For each example set, the model is then asked to generate a new grid to complete the visual pattern, a task intended to assess flexible reasoning and the model's ability to acquire new skills outside of its training. This setup is similar to Raven's Progressive Matrices, a test of human reasoning.
The test results are part of what OpenAI and other tech companies use to guide model development and assessment. Recently, OpenAI's soon-to-be released o3 model achieved vast improvement on ARC-AGI compared to previous AI models, leading some researchers to view it as a breakthrough in AGI. Others disagree. "There's nothing about ARC that's general. It's so specific and weird," Recht says.
Computer scientist José Hernández-Orallo of the Universitat Politècnica de València in Spain says that it's possible ARC-AGI just assesses a model's ability to recognize images. Previous generations of language models could solve similar problems with high accuracy if the visual grids were described using text, he says. That context makes o3's results seem less novel.
Plus, there's a limited number of grid configurations, and some AI models with tons of computing power at their disposal can "brute force" their way to correct responses simply by generating all possible answers and selecting the one that fits best, effectively reducing the task to a multiple-choice problem rather than one of novel reasoning (a toy illustration of this shortcut appears below). To tackle each ARC-AGI task, o3 uses an enormous amount of computing power (and money) at test time. Operating in an efficient mode, it costs about $30 per task, Chollet says. In a less-efficient setting, one task can cost about $3,000. Just because the model can solve the problem doesn't mean it's practical or feasible to routinely use it on similarly challenging tasks.
It's not just ARC-AGI that's contentious. Determining whether an AI model counts as AGI is complicated by the fact that every available test of AI ability is flawed. Just as Raven's Progressive Matrices and other IQ tests are imperfect measures of human intelligence and face constant criticism for their biases, so too do AGI evaluations, says Amelia Hardy, a computer scientist at Stanford University. "It's really hard to know that we're measuring [what] we care about."
OpenAI's o3, for example, correctly responded to more than a quarter of the questions in a collection of exceptionally difficult problems called the Frontier Math benchmark, says company spokesperson Lindsay McCallum. These problems take professional mathematicians hours to solve, according to the benchmark's creators. On its face, o3 seems successful. But this success may be partly due to OpenAI funding the benchmark's development and having access to the testing dataset while developing o3. Such data contamination is a continual difficulty in assessing AI models, especially for AGI, where the ability to generalize and abstract beyond training data is considered crucial.
AI models can also seem to perform very well on complex tasks, like accurately responding to Ph.D.-level science questions, while failing on more basic ones, like counting the number of r's in "strawberry." This discrepancy indicates a fundamental misalignment in how these computer systems process queries and understand problems. Yet, AI developers aren't collecting and sharing the sort of information that might help researchers better gauge why, Hernández-Orallo says.
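To make the grid-puzzle format concrete, here is a minimal sketch in Python, assuming a simplified version of ARC-AGI's train/test task structure. The toy grids, the tiny rule library and the helper names (Grid, brute_force_solve, exact_match) are invented for illustration and are not part of the official ARC-AGI harness; the point is only to show the task shape, the all-or-nothing scoring, and why a system that can cheaply enumerate candidate answers and keep whichever one fits turns the puzzle into something closer to multiple choice than open-ended reasoning.
```python
# Illustrative sketch only: a simplified ARC-AGI-style task made of a few
# input/output grid pairs that share a hidden rule. Scoring is exact match
# over every cell. The "solver" simply enumerates a small library of
# candidate transformations and keeps the one consistent with the training
# pairs: candidate generation and selection rather than reasoning about the rule.
from typing import Callable, List, Tuple

Grid = List[List[int]]  # each integer is a colour index


def identity(g: Grid) -> Grid:
    return [row[:] for row in g]


def mirror_lr(g: Grid) -> Grid:
    return [list(reversed(row)) for row in g]


def rotate_180(g: Grid) -> Grid:
    return [list(reversed(row)) for row in reversed(g)]


CANDIDATE_RULES: List[Callable[[Grid], Grid]] = [identity, mirror_lr, rotate_180]


def exact_match(predicted: Grid, target: Grid) -> bool:
    """ARC-style scoring is all-or-nothing: every cell must match."""
    return predicted == target


def brute_force_solve(train_pairs: List[Tuple[Grid, Grid]], test_input: Grid) -> Grid:
    """Keep the first candidate rule that reproduces all training outputs."""
    for rule in CANDIDATE_RULES:
        if all(rule(inp) == out for inp, out in train_pairs):
            return rule(test_input)
    return identity(test_input)  # fallback: echo the input


# Toy task: the hidden rule is "mirror the grid left-to-right".
train_pairs = [
    ([[1, 0], [2, 0]], [[0, 1], [0, 2]]),
    ([[3, 3], [0, 4]], [[3, 3], [4, 0]]),
]
test_input = [[5, 0], [0, 6]]
prediction = brute_force_solve(train_pairs, test_input)
print(exact_match(prediction, [[0, 5], [6, 0]]))  # True
```
A real contender has a vastly larger space of candidate answers to search, which is where the test-time compute costs described above come from; the structure of the evaluation, though, is the same.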
Many developers provide only a single accuracy value for each benchmark, as opposed to a detailed breakdown of which types of questions a model answered correctly and incorrectly. Without additional detail, it's impossible to determine where a model is struggling, why it's succeeding, or if any single test result demonstrates a breakthrough in machine intelligence, experts say. (A toy illustration of such a breakdown appears after this excerpt.)
Even if a model passes a specific, quantifiable test with flying colors, such as the bar exam or medical boards, there are few guarantees that those results will translate to expert-level human performance in messy, real-world conditions, says David Rein, a computer scientist at the nonprofit Model Evaluation and Threat Research based in Berkeley, Calif. For instance, when asked to write legal briefs, generative AI models still routinely fabricate information. Although one study of GPT-4 suggested that the chatbot could outperform human physicians in diagnosing patients, more detailed research has found that comparable AI models perform far worse than actual doctors when faced with tests that mimic real-world conditions. And no study or benchmark result indicates that current AI models should be making major governance decisions over expert humans.
The benchmarks that OpenAI, DeepSeek and other companies report results from "do not tell us much about capabilities in the real world," Rein says, although they can provide reasonable information for comparing models to one another.
So far, researchers have tested AI models largely by providing them with discrete problems that have known answers. However, humans don't always have the luxury of knowing what the problem before them is, whether it's solvable or in what time frame. People can identify key problems, prioritize tasks and, crucially, know when to give up. It's not yet clear that machines can or do. The most advanced "autonomous" agents struggle to navigate ordering pizza or groceries online.
Large language models and neural networks have improved dramatically in recent months and years. "They're definitely useful in a lot of different ways," Recht says, pointing to the ability of newer models to summarize and digest data or produce serviceable computer code with few mistakes. But attempts like ARC-AGI to measure general ability don't necessarily clarify what AI models can and can't be used for. "I don't think it matters whether or not they're artificially generally intelligent," he says.
What might matter far more, based on the recent DeepSeek news, are traditional metrics like cost per task. Utility is determined by both the quality of a tool and whether that tool is affordable enough to scale. Intelligence is only part of the equation.
AGI is supposed to serve as a guiding light for AI developers. If achieved, it's meant to herald a major turning point for society, beyond which machines will function independently on equal or higher footing than humans. But so far, AI has had major societal impacts, both good and bad, without any consensus on whether we're nearing (or have already surpassed) this turning point, Recht, Hernández-Orallo and Hardy say. For example, scientists are using AI tools to create new, potentially lifesaving molecules. Yet in classrooms worldwide, generative chatbots have disrupted assessments. A recent Pew Research Center survey found that more and more U.S. teens are outsourcing assignments to ChatGPT. And a 2023 study in Nature reported that growing AI assistance in university courses has made cheating harder to detect.
To say that AI will become transformative once we reach AGI ignores all the trees for the forest.
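The reporting gap described above, a single headline accuracy number rather than a breakdown by question type, is easy to illustrate. Below is a minimal sketch with invented categories and results; none of the numbers come from any real benchmark or model.
```python
# Hypothetical per-question results as (category, answered_correctly) pairs.
# The same headline number can hide very different per-category behaviour.
from collections import defaultdict

results = [
    ("phd_science", True), ("phd_science", True), ("phd_science", False),
    ("arithmetic", True), ("arithmetic", True), ("arithmetic", True),
    ("character_counting", False), ("character_counting", False),
]

# The single number most developers report.
overall = sum(ok for _, ok in results) / len(results)
print(f"headline accuracy: {overall:.0%}")

# The breakdown researchers say is rarely released.
by_category = defaultdict(lambda: [0, 0])  # category -> [correct, total]
for category, ok in results:
    by_category[category][0] += int(ok)
    by_category[category][1] += 1

for category, (correct, total) in sorted(by_category.items()):
    print(f"{category:20s} {correct}/{total} ({correct / total:.0%})")
```
A per-category table like this is the kind of detail the researchers quoted above say would make it possible to see where a model is actually struggling.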
[3]
Why I'm Feeling the A.G.I.
Kevin Roose is a technology columnist and a co-host of the New York Times tech podcast "Hard Fork."
Here are some things I believe about artificial intelligence:
I believe that over the past several years, A.I. systems have started surpassing humans in a number of domains -- math, coding and medical diagnosis, just to name a few -- and that they're getting better every day.
I believe that very soon -- probably in 2026 or 2027, but possibly as soon as this year -- one or more A.I. companies will claim they've created an artificial general intelligence, or A.G.I., which is usually defined as something like "a general-purpose A.I. system that can do almost all cognitive tasks a human can do."
I believe that when A.G.I. is announced, there will be debates over definitions and arguments about whether or not it counts as "real" A.G.I., but that these mostly won't matter, because the broader point -- that we are losing our monopoly on human-level intelligence, and transitioning to a world with very powerful A.I. systems in it -- will be true.
I believe that over the next decade, powerful A.I. will generate trillions of dollars in economic value and tilt the balance of political and military power toward the nations that control it -- and that most governments and big corporations already view this as obvious, as evidenced by the huge sums of money they're spending to get there first.
I believe that most people and institutions are totally unprepared for the A.I. systems that exist today, let alone more powerful ones, and that there is no realistic plan at any level of government to mitigate the risks or capture the benefits of these systems.
I believe that hardened A.I. skeptics -- who insist that the progress is all smoke and mirrors, and who dismiss A.G.I. as a delusional fantasy -- not only are wrong on the merits, but are giving people a false sense of security.
I believe that whether you think A.G.I. will be great or terrible for humanity -- and honestly, it may be too early to say -- its arrival raises important economic, political and technological questions to which we currently have no answers.
I believe that the right time to start preparing for A.G.I. is now.
This may all sound crazy. But I didn't arrive at these views as a starry-eyed futurist, an investor hyping my A.I. portfolio or a guy who took too many magic mushrooms and watched "Terminator 2." I arrived at them as a journalist who has spent a lot of time talking to the engineers building powerful A.I. systems, the investors funding it and the researchers studying its effects. And I've come to believe that what's happening in A.I. right now is bigger than most people understand.
In San Francisco, where I'm based, the idea of A.G.I. isn't fringe or exotic. People here talk about "feeling the A.G.I.," and building smarter-than-human A.I. systems has become the explicit goal of some of Silicon Valley's biggest companies. Every week, I meet engineers and entrepreneurs working on A.I. who tell me that change -- big change, world-shaking change, the kind of transformation we've never seen before -- is just around the corner.
"Over the past year or two, what used to be called 'short timelines' (thinking that A.G.I. would probably be built this decade) has become a near-consensus," Miles Brundage, an independent A.I. policy researcher who left OpenAI last year, told me recently.
Outside the Bay Area, few people have even heard of A.G.I., let alone started planning for it. And in my industry, journalists who take A.I. progress seriously still risk getting mocked as gullible dupes or industry shills.
Honestly, I get the reaction. Even though we now have A.I. systems contributing to Nobel Prize-winning breakthroughs, and even though 400 million people a week are using ChatGPT, a lot of the A.I. that people encounter in their daily lives is a nuisance. I sympathize with people who see A.I. slop plastered all over their Facebook feeds, or have a clumsy interaction with a customer service chatbot and think: This is what's going to take over the world?
I used to scoff at the idea, too. But I've come to believe that I was wrong. A few things have persuaded me to take A.I. progress more seriously.
The insiders are alarmed.
The most disorienting thing about today's A.I. industry is that the people closest to the technology -- the employees and executives of the leading A.I. labs -- tend to be the most worried about how fast it's improving.
This is quite unusual. Back in 2010, when I was covering the rise of social media, nobody inside Twitter, Foursquare or Pinterest was warning that their apps could cause societal chaos. Mark Zuckerberg wasn't testing Facebook to find evidence that it could be used to create novel bioweapons, or carry out autonomous cyberattacks.
But today, the people with the best information about A.I. progress -- the people building powerful A.I., who have access to more-advanced systems than the general public sees -- are telling us that big change is near. The leading A.I. companies are actively preparing for A.G.I.'s arrival, and are studying potentially scary properties of their models, such as whether they're capable of scheming and deception, in anticipation of their becoming more capable and autonomous.
Sam Altman, the chief executive of OpenAI, has written that "systems that start to point to A.G.I. are coming into view." Demis Hassabis, the chief executive of Google DeepMind, has said A.G.I. is probably "three to five years away." Dario Amodei, the chief executive of Anthropic (who doesn't like the term A.G.I. but agrees with the general principle), told me last month that he believed we were a year or two away from having "a very large number of A.I. systems that are much smarter than humans at almost everything."
Maybe we should discount these predictions. After all, A.I. executives stand to profit from inflated A.G.I. hype, and might have incentives to exaggerate. But lots of independent experts -- including Geoffrey Hinton and Yoshua Bengio, two of the world's most influential A.I. researchers, and Ben Buchanan, who was the Biden administration's top A.I. expert -- are saying similar things. So are a host of other prominent economists, mathematicians and national security officials.
To be fair, some experts doubt that A.G.I. is imminent. But even if you ignore everyone who works at A.I. companies, or has a vested stake in the outcome, there are still enough credible independent voices with short A.G.I. timelines that we should take them seriously.
The A.I. models keep getting better.
To me, just as persuasive as expert opinion is the evidence that today's A.I. systems are improving quickly, in ways that are fairly obvious to anyone who uses them.
In 2022, when OpenAI released ChatGPT, the leading A.I. models struggled with basic arithmetic, frequently failed at complex reasoning problems and often "hallucinated," or made up nonexistent facts. Chatbots from that era could do impressive things with the right prompting, but you'd never use one for anything critically important.
Today's A.I. models are much better. Now, specialized models are putting up medalist-level scores on the International Math Olympiad, and general-purpose models have gotten so good at complex problem solving that we've had to create new, harder tests to measure their capabilities. Hallucinations and factual mistakes still happen, but they're rarer on newer models. And many businesses now trust A.I. models enough to build them into core, customer-facing functions.
(The New York Times has sued OpenAI and its partner, Microsoft, accusing them of copyright infringement of news content related to A.I. systems. OpenAI and Microsoft have denied the claims.)
Some of the improvement is a function of scale. In A.I., bigger models, trained using more data and processing power, tend to produce better results, and today's leading models are significantly bigger than their predecessors. But it also stems from breakthroughs that A.I. researchers have made in recent years -- most notably, the advent of "reasoning" models, which are built to take an additional computational step before giving a response.
Reasoning models, which include OpenAI's o1 and DeepSeek's R1, are trained to work through complex problems, and are built using reinforcement learning -- a technique that was used to teach A.I. to play the board game Go at a superhuman level. They appear to be succeeding at things that tripped up previous models. (Just one example: GPT-4o, a standard model released by OpenAI, scored 9 percent on AIME 2024, a set of extremely hard competition math problems; o1, a reasoning model that OpenAI released several months later, scored 74 percent on the same test.)
As these tools improve, they are becoming useful for many kinds of white-collar knowledge work. My colleague Ezra Klein recently wrote that the outputs of ChatGPT's Deep Research, a premium feature that produces complex analytical briefs, were "at least the median" of the human researchers he'd worked with.
I've also found many uses for A.I. tools in my work. I don't use A.I. to write my columns, but I use it for lots of other things -- preparing for interviews, summarizing research papers, building personalized apps to help me with administrative tasks. None of this was possible a few years ago. And I find it implausible that anyone who uses these systems regularly for serious work could conclude that they've hit a plateau.
If you really want to grasp how much better A.I. has gotten recently, talk to a programmer. A year or two ago, A.I. coding tools existed, but were aimed more at speeding up human coders than at replacing them. Today, software engineers tell me that A.I. does most of the actual coding for them, and that they increasingly feel that their job is to supervise the A.I. systems.
Jared Friedman, a partner at Y Combinator, a start-up accelerator, recently said a quarter of the accelerator's current batch of start-ups were using A.I. to write nearly all their code. "A year ago, they would've built their product from scratch -- but now 95 percent of it is built by an A.I.," he said.
Overpreparing is better than underpreparing.
In the spirit of epistemic humility, I should say that I, and many others, could be wrong about our timelines. Maybe A.I. progress will hit a bottleneck we weren't expecting -- an energy shortage that prevents A.I. companies from building bigger data centers, or limited access to the powerful chips used to train A.I. models. Maybe today's model architectures and training techniques can't take us all the way to A.G.I., and more breakthroughs are needed.
But even if A.G.I. arrives a decade later than I expect -- in 2036, rather than 2026 -- I believe we should start preparing for it now.
Most of the advice I've heard for how institutions should prepare for A.G.I. boils down to things we should be doing anyway: modernizing our energy infrastructure, hardening our cybersecurity defenses, speeding up the approval pipeline for A.I.-designed drugs, writing regulations to prevent the most serious A.I. harms, teaching A.I. literacy in schools and prioritizing social and emotional development over soon-to-be-obsolete technical skills. These are all sensible ideas, with or without A.G.I.
Some tech leaders worry that premature fears about A.G.I. will cause us to regulate A.I. too aggressively. But the Trump administration has signaled that it wants to speed up A.I. development, not slow it down. And enough money is being spent to create the next generation of A.I. models -- hundreds of billions of dollars, with more on the way -- that it seems unlikely that leading A.I. companies will pump the brakes voluntarily.
I don't worry about individuals overpreparing for A.G.I., either. A bigger risk, I think, is that most people won't realize that powerful A.I. is here until it's staring them in the face -- eliminating their job, ensnaring them in a scam, harming them or someone they love. This is, roughly, what happened during the social media era, when we failed to recognize the risks of tools like Facebook and Twitter until they were too big and entrenched to change.
That's why I believe in taking the possibility of A.G.I. seriously now, even if we don't know exactly when it will arrive or precisely what form it will take. If we're in denial -- or if we're simply not paying attention -- we could lose the chance to shape this technology when it matters most.
[4]
Inching towards AGI: How reasoning and deep research are expanding AI from statistical prediction to structured problem-solving
AI has evolved at an astonishing pace. What seemed like science fiction just a few years ago is now an undeniable reality. Back in 2017, my firm launched an AI Center of Excellence. AI was certainly getting better at predictive analytics and many machine learning (ML) algorithms were being used for voice recognition, spam detection, spell checking (and other applications) -- but it was early. We believed then that we were only in the first inning of the AI game.
The arrival of GPT-3 and especially GPT-3.5 -- which was tuned for conversational use and served as the basis for the first ChatGPT in November 2022 -- was a dramatic turning point, now forever remembered as the "ChatGPT moment." Since then, there has been an explosion of AI capabilities from hundreds of companies. In March 2023 OpenAI released GPT-4, which promised "sparks of AGI" (artificial general intelligence). By that time, it was clear that we were well beyond the first inning. Now, it feels like we are in the final stretch of an entirely different sport.
The flame of AGI
Two years on, the flame of AGI is beginning to appear. On a recent episode of the Hard Fork podcast, Dario Amodei -- who has been in the AI industry for a decade, formerly as VP of research at OpenAI and now as CEO of Anthropic -- said there is a 70 to 80% chance that we will have a "very large number of AI systems that are much smarter than humans at almost everything before the end of the decade, and my guess is 2026 or 2027."
The evidence for this prediction is becoming clearer. Late last summer, OpenAI launched o1 -- the first "reasoning model." They've since released o3, and other companies have rolled out their own reasoning models, including Google and, famously, DeepSeek. Reasoners use chain-of-thought (CoT), breaking down complex tasks at run time into multiple logical steps, just as a human might approach a complicated task (a minimal sketch of the idea appears below). Sophisticated AI agents including OpenAI's deep research and Google's AI co-scientist have recently appeared, portending huge changes to how research will be performed.
Unlike earlier large language models (LLMs) that primarily pattern-matched from training data, reasoning models represent a fundamental shift from statistical prediction to structured problem-solving. This allows AI to tackle novel problems beyond its training, enabling genuine reasoning rather than advanced pattern recognition.
I recently used Deep Research for a project and was reminded of the quote from Arthur C. Clarke: "Any sufficiently advanced technology is indistinguishable from magic." In five minutes, this AI produced what would have taken me 3 to 4 days. Was it perfect? No. Was it close? Yes, very. These agents are quickly becoming truly magical and transformative and are among the first of many similarly powerful agents that will soon come onto the market.
The most common definition of AGI is a system capable of doing almost any cognitive task a human can do. These early agents of change suggest that Amodei and others who believe we are close to that level of AI sophistication could be correct, and that AGI will be here soon. This reality will lead to a great deal of change, requiring people and processes to adapt in short order.
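As a rough, model-agnostic illustration of the chain-of-thought idea mentioned above: the generate placeholder below is hypothetical (it stands in for whatever text-generation call you use, not any specific vendor's API), and the prompts are invented. The only point is the contrast between asking for an answer directly and asking the model to lay out intermediate steps at run time.
```python
def generate(prompt: str) -> str:
    """Hypothetical placeholder for a call to a language model client."""
    raise NotImplementedError("swap in your own model client here")


QUESTION = "A train leaves at 9:40 and arrives at 13:05. How long is the trip?"

# Direct prompting: ask only for the final answer.
direct_prompt = f"{QUESTION}\nAnswer with just the duration."

# Chain-of-thought style prompting: ask for intermediate steps first.
cot_prompt = (
    f"{QUESTION}\n"
    "Work through the problem step by step before answering:\n"
    "1. Split the interval into whole hours and leftover minutes.\n"
    "2. Add the pieces together.\n"
    "3. State the final duration on its own line."
)

# A reasoning model effectively bakes the second style in at inference time,
# spending extra tokens (and compute) on intermediate steps before answering.
```
That extra test-time work is also why reasoning models tend to cost more per query, echoing the per-task cost figures discussed earlier in this roundup.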
But is it really AGI?
There are various scenarios that could emerge from the near-term arrival of powerful AI. It is challenging and frightening that we do not really know how this will go.
New York Times columnist Ezra Klein addressed this uncertainty in a recent podcast: "We are rushing toward AGI without really understanding what that is or what that means." He claims, for example, that there is little critical thinking or contingency planning going on around the implications, including what this would truly mean for employment.
Of course, there is another perspective on this uncertain future and lack of planning, as exemplified by Gary Marcus, who believes deep learning generally (and LLMs specifically) will not lead to AGI. Marcus issued what amounts to a takedown of Klein's position, citing notable shortcomings in current AI technology and suggesting it is just as likely that we are a long way from AGI.
Marcus may be correct, but this might also be simply an academic dispute about semantics. As an alternative to the AGI term, Amodei simply refers to "powerful AI" in his Machines of Loving Grace blog, as it conveys a similar idea without the imprecise definition, "sci-fi baggage and hype." Call it what you will, but AI is only going to grow more powerful.
Playing with fire: The possible AI futures
In a 60 Minutes interview, Alphabet CEO Sundar Pichai said he thought of AI as "the most profound technology humanity is working on. More profound than fire, electricity or anything that we have done in the past." That certainly fits with the growing intensity of AI discussions. Fire, like AI, was a world-changing discovery that fueled progress but demanded control to prevent catastrophe. The same delicate balance applies to AI today. A discovery of immense power, fire transformed civilization by enabling warmth, cooking, metallurgy and industry. But it also brought destruction when uncontrolled. Whether AI becomes our greatest ally or our undoing will depend on how well we manage its flames.
To take this metaphor further, there are various scenarios that could soon emerge from even more powerful AI. While each of these scenarios appears plausible, it is discomforting that we really do not know which are the most likely, especially since the timeline could be short. We can see early signs of each: AI-driven automation increasing productivity; misinformation spreading at scale and eroding trust; and concerns over disingenuous models that resist their guardrails. Each scenario would demand its own adaptations from individuals, businesses, governments and society. Our lack of clarity on the trajectory for AI impact suggests that some mix of all three futures is inevitable.
The rise of AI will lead to a paradox, fueling prosperity while bringing unintended consequences. Amazing breakthroughs will occur, as will accidents. Some new fields will appear with tantalizing possibilities and job prospects, while other stalwarts of the economy will fade into bankruptcy.
We may not have all the answers, but the future of powerful AI and its impact on humanity is being written now. What we saw at the recent Paris AI Action Summit was a mindset of hoping for the best, which is not a smart strategy. Governments, businesses and individuals must shape AI's trajectory before it shapes us. The future of AI won't be determined by technology alone, but by the collective choices we make about how to deploy it.
As artificial intelligence rapidly advances, the concept of Artificial General Intelligence (AGI) sparks intense debate among experts, raising questions about its definition, timeline, and potential impact on society.
The concept of Artificial General Intelligence (AGI) has become a hot topic in tech circles and is increasingly entering mainstream discussions. AGI refers to a future AI system that could outperform humans on cognitive tasks 1. However, the definition of AGI remains hazy and changeable, often shaped by researchers or companies developing the technology 1.
Initially, AGI denoted an autonomous computer capable of performing any task a human could, including physical activities. As robotics lagged behind computing progress, the definition narrowed to focus on cognitive tasks 2. Today, some define AGI as AI systems that can autonomously perform most economically valuable tasks a human could handle at a computer, while others emphasize flexible reasoning ability and autonomy across unspecified tasks 2.
Several recent developments have fueled the AGI debate: the launch of Manus, a Chinese model built for agentic tasks; Ezra Klein's podcast episode on AGI with former White House AI advisor Ben Buchanan; the "Superintelligence Strategy" paper from Eric Schmidt, Alexandr Wang and Dan Hendrycks; and the strong showing of OpenAI's o3 reasoning model on benchmarks such as ARC-AGI 1 2.
Despite these developments, skepticism remains: Gary Marcus argues that core technical problems persist and that AGI is likely much further away than optimists claim, while researchers note that every available test of AI ability is flawed and that headline benchmark results may reflect brute-force computing or data contamination rather than general reasoning 1 2.
The potential arrival of AGI raises significant questions about its impact on society, from labor markets and national security to education, governance and the pace of scientific discovery 1 2 3.
Many experts argue that most people and institutions are unprepared for even current AI systems, let alone more powerful ones 3. There is growing concern about the lack of comprehensive plans at governmental levels to mitigate risks or capture benefits of these systems 3.
As the debate continues, several key points emerge: AGI remains poorly defined and hard to measure; expert timelines range from a few years to decades; and AI is already producing major societal effects, good and bad, regardless of whether an AGI threshold is ever crossed 2 3.
As AI capabilities continue to expand, the conversation around AGI is likely to intensify, demanding increased attention from policymakers, researchers, and the public alike.
References
[1] AGI is suddenly a dinner table topic
[2] The meaning of artificial general intelligence remains unclear
[3] Why I'm Feeling the A.G.I.
[4] Inching towards AGI: How reasoning and deep research are expanding AI from statistical prediction to structured problem-solving