8 Sources
[1]
Sam Altman says the Singularity is imminent - here's why
In a new blog post, Altman laid out his vision for a hugely prosperous future powered by superintelligent AI. We'll figure things out as we go along, he argues. In his 2005 book "The Singularity Is Near," the futurist Ray Kurzweil predicted that the Singularity -- the moment in which machine intelligence surpasses our own -- would occur around the year 2045. Sam Altman believes it's much closer. In a blog post published Tuesday, the OpenAI CEO delivered a homily devoted to what he views as the imminent arrival of artificial "superintelligence." Whereas artificial general intelligence, or AGI, is usually defined as a computer system able to match or outperform humans on any cognitive task, a superintelligent AI would go much further, overshadowing our own intelligence to such a vast degree that we'd be helpless to fathom it, like snails trying to understand general relativity. (Disclosure: Ziff Davis, ZDNET's parent company, filed an April 2025 lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.) Crucially, the blog post frames this supposedly inevitable arrival of superintelligent AI as one that will happen gradually enough for society to prepare itself. By comparison, he looks back at the past five years, a relatively short period in which most people have gone from knowing nothing about AI to using powerful tools like ChatGPT on a daily basis, to the point where generative AI has become almost mundane. "Humanity is close to building digital superintelligence, and at least so far it's much less weird than it seems like it should be," Altman wrote. His writing veers at times into quasi-religious territory, portraying the arrival of superintelligent AI in terms reminiscent of early Christian prophets describing the Second Coming. 
While his rhetoric is cloaked in scientific, secular language, it's hard not to detect just a hint of proselytizing: "the 2030s are likely going to be wildly different from any time that has come before," he writes. "We do not know how far beyond human-level intelligence we can go, but we are about to find out." AGI -- and by extension, artificial superintelligence -- has been a divisive subject in the tech world. Like Altman, many believe its arrival is not a matter of if, but when. Meta is reportedly preparing to launch an internal research lab devoted to building superintelligence. Others doubt that it's even technically possible to build a machine that's more advanced than the human brain. OpenAI catapulted to global fame following its release of ChatGPT in late 2022. Since then, the company has been shipping new AI products at a breakneck pace, prompting a steady trickle of employees to depart, citing concerns that safety was being deprioritized in the name of speed. Many have joined Anthropic, an AI company that was itself founded by former OpenAI employees, or have gone on to launch their own ventures. Ilya Sutskever, for example -- a cofounder of OpenAI and its former chief scientist -- founded a company called Safe Superintelligence (SSI) last June. AI developers in general have also been widely criticized for rushing to automate human labor without offering concrete policy proposals for what millions of displaced workers ought to do with themselves. Altman's new blog post echoes a refrain that's become common among tech leaders on this front: Yes, there will be some job losses, but ultimately the technology will create entirely new categories of jobs to replace those that have been automated; and besides, AI is going to generate so much wealth for humanity at large that people will have the freedom to pursue more meaningful things than work. (Just what those things are is never made quite clear.) 
Altman has also supported the idea of implementing a universal basic income to support the masses as the world adjusts to his vision of a techno-utopia. "The rate of technological progress will keep accelerating, and it will continue to be the case that people are capable of adapting to almost anything," he wrote in the blog post. "There will be very hard parts like whole classes of jobs going away, but on the other hand the world will be getting so much richer so quickly that we'll be able to seriously entertain new policy ideas we never could before." OpenAI has evolved quickly and dramatically since it was founded a decade ago. It began as a nonprofit, aimed -- as its name suggests -- at open AI research. It's since become a profit-seeking behemoth competing with the likes of Google and Meta. The company's mission statement has long been "to ensure that artificial general intelligence benefits all of humanity." Now, judging from Altman's blog post, OpenAI seems to be aiming for an even loftier goal: "before anything else, we are a superintelligence research company."
[2]
Sam Altman's outrageous 'Singularity' blog perfectly sums up AI in 2025
Sam Altman has been a blogger far longer than he's been in the AI business. Now the CEO of OpenAI, Altman began his blog -- titled simply, if concerningly, "Sam Altman" -- in 2013. He was in year 3 of working at the startup accelerator Y Combinator at the time, and would soon be promoted to president. The first page of posts contains no references to AI. Instead we get musings on B2B startup tools, basic dinner party conversation openers, and UFOs (Altman was a skeptic). Then there was this sudden insight: "The most successful founders do not set out to create companies," Altman wrote. "They are on a mission to create something closer to a religion." Fast-forward to Altman's latest 2025 blog post, "The Gentle Singularity" -- and, well, it's hard not to say mission accomplished. "We are past the event horizon; the takeoff has started," is how Altman opens, and the tone only gets more messianic from there. "Humanity is close to building digital superintelligence." Can I get a hallelujah? To be clear, the science does not suggest humanity is close to building digital superintelligence, a step beyond artificial general intelligence. The evidence says we have built models that can be very useful at crunching giant amounts of information in some ways, and wildly wrong in others. AI hallucinations appear to be baked into the models, increasingly so with newer models, and they're doing damage in the real world. There are no breakthroughs in reasoning, either, as a paper published the same week made plain: AI models sometimes don't see the answer even when you tell them the answer. Don't tell that to Altman. He's off on a trip to the future to rival that of Ray Kurzweil, the offbeat Silicon Valley guru who first proposed we're accelerating toward a technological singularity. Kurzweil set his all-change event many decades down the line. Altman is willing to risk looking wrong as soon as next year: "2026 will likely see the arrival of systems that can figure out novel insights. 
2027 may see the arrival of robots that can do tasks in the real world ... It's hard to even imagine what we will have discovered by 2035; maybe we will go from solving high-energy physics one year to beginning space colonization the next year." The "likely," "may," and "maybe" there are doing a lot of lifting. Altman may have "something closer to a religion" in his AGI assumptions, but he cannot cast reason aside completely. Indeed, shorn of the excitable sci-fi language, he's not always claiming that much (don't we already have "robots that can do tasks in the real world"?). As for his most outlandish claims, Altman has learned to preface them with a word salad that could mean anything. Take this doozy: "In some big sense, ChatGPT is already more powerful than any human who has ever lived." Can I get a citation needed? Altman's latest blog isn't all future-focused speculation. Buried within is the OpenAI CEO's first-ever statement on ChatGPT's energy and water usage -- and as with his needless drama over a Scarlett Johansson-like voice, opening that Pandora's box may not go the way Altman thinks. Since ChatGPT exploded in popularity in 2023, OpenAI -- along with main AI rivals Google and Microsoft -- has stonewalled researchers looking for details on their data center usage. "We don't even know how big models like GPT are," Sasha Luccioni, climate lead at open-source AI platform Hugging Face, told me last year. "Nothing is divulged, everything is a company secret." Altman finally divulged, kinda. In the middle of a blog post, in parentheses, with the preface "people are often curious about how much energy a ChatGPT query uses," the OpenAI CEO offers two stats: "the average query uses about 0.34 watt-hours ... and about 0.000085 gallons of water." There's no more data offered to confirm these stats; Altman doesn't even specify which model of ChatGPT. OpenAI hasn't responded to multiple follow-up requests from multiple news outlets. 
Altman has an obvious interest in downplaying the amount of energy and water OpenAI requires, and he's already doing it here with a little sleight of hand. It isn't the average query that concerns researchers like Luccioni; it's the enormous amount of energy and water required to train the models in the first place. But now that he's shown himself to be responsive to the "often curious," Altman has less of a reason to stonewall. Why not release all the data so others can replicate his numbers, you know, like scientists do? Meanwhile, battles over data center energy and water usage are brewing across the US. Luccioni has started an AI Energy Leaderboard that shows how wildly open-source AI models vary in their consumption. This is serious stuff, because companies don't like to spend more on energy than they need to, and because there's buy-in: Meta and (to a lesser extent) Microsoft and Google are already on the board. Can OpenAI afford not to be? In the end, the answer depends on whether Altman is building a company or more of a religion.
[3]
Sam Altman wants "a significant fraction of the power on Earth" to run AI, and I've unlocked a new Doomsday scenario.
Altman has a dream for the future that is the stuff of my personal nightmares. AMD held its Advancing AI 2025 summit in San Jose, California, on Thursday, headlined by CEO Lisa Su's lengthy keynote address. However, it was not what Su said that alarmed me. OpenAI CEO Sam Altman joined Su on stage to wrap up the two-hour keynote, and their back-and-forth included one short exchange toward the end that now lives rent-free in my brain. Referencing the recent ChatGPT outages, Su asked Altman, "Are there ever enough GPUs?" Altman, whose OpenAI is an AMD customer, replied, "Theoretically, at some points, you can see that a significant fraction of the power on Earth should be spent running AI compute. And maybe we're going to get there." Which sounds great for the AI industry. But what about the rest of us? Okay, look. I'm not the biggest fan of AI usage. One of the few types of AI I find myself interested in is Intel's on-device AI Assistant Builder, a custom RAG tool built on a small language model that runs locally and is mostly useful for the kind of busywork no one has time for. Cloud-based AI data centers, though, fuel my nightmares. The current pace of artificial intelligence growth is unsustainable. While Altman claims that users shouldn't be worried about ChatGPT's energy cost, adding AI usage on top of other sources of environmental pollution puts more pressure on a planet that was in dire straits before OpenAI programmed its first chatbot. AI is not just an industry built on destroying the foundations of its own existence, though that is certainly an economic crisis in the making. It's also incredibly damaging to the environment and to the education of future generations. AI-generated deepfakes are getting so good that most people can't tell real footage from AI video, leading us into a "future of AI control." Without guardrails, unchecked artificial intelligence growth is very much an ouroboros. AI has already negatively impacted most industries. 
Even if you accept that AI will eventually replace human workers in most roles, who will buy the products? AI won't be selling laptops, phones, or software to itself. Throw in the ecological and educational damage AI systems can cause, and it's a bleak future we're looking at. Thankfully, large language model (LLM) systems like ChatGPT are becoming more efficient over time, so their energy demands might level off before we need to reopen Three Mile Island just to run virtual assistants. There are also plenty of companies looking into sustainability solutions for AI systems. But is it worth it? We still haven't gotten the "better Siri" we were promised. Agentic AI just sounds like a new way to commit to total social isolation. There may never be a "killer app" for AI.
[4]
Sam Altman Says "Significant Fraction" of Earth's Total Electricity Should Go to Running AI
During a recent public appearance, OpenAI CEO Sam Altman admitted that he wants a large chunk of the world's power grid to help him run artificial intelligence models. As Laptop Mag flagged, he dropped that bomb during AMD's AI conference last week after Lisa Su, the CEO of the hosting firm, who counts Altman as a client and friend, mentioned ChatGPT's recent outages. Though OpenAI hasn't revealed the exact causes of its massive June outage, there's a good chance it had to do with running out of computing power. This seems all the more probable given that Altman admitted earlier this year that the company had run out of graphics processing units, or GPUs, the high-end computer chips that AMD sells and companies like OpenAI use to power their large language models (LLMs). Speaking to that likelihood, Su asked Altman, "are there ever going to be enough GPUs?" With a chuckle, the inscrutable executive paused before responding -- and then essentially said the quiet part out loud. "Theoretically, at some points, you can see that a significant fraction of the power on Earth should be spent running AI compute," Altman said. "And maybe we're going to get there." To reiterate: the CEO of the world's largest AI company said he believes a "significant fraction" of the electricity on this planet should be used to run AI -- and said so to the CEO of a company whose GPUs he recently committed to purchasing. Though Su moved on quickly from the exchange, the undercurrent beneath Altman's admission is, to paraphrase Laptop Mag's Madeline Ricchiuto, low-key nightmare fuel. Perhaps most upsetting about Altman's flippant admission is the environmental impact he so casually ignored. Conventional electricity generation often relies on the combustion of fossil fuels, which have been killing our planet since way before OpenAI was a twinkle in Altman's eye. 
Add in a new electricity-guzzling industry like AI to a power grid already stretched to the brink, and you've got a serious problem -- one that Altman, Su, and everyone else who boosts AI seems unwilling to face head-on. In a new blog post in which the OpenAI CEO claimed that the world is approaching what he calls a "gentle singularity," or the point at which artificial intelligence meets or surpasses the capabilities of humans, Altman attempted to explain how much power ChatGPT uses -- but his description fell short. "People are often curious about how much energy a ChatGPT query uses; the average query uses about 0.34 watt-hours, about what an oven would use in a little over one second, or a high-efficiency lightbulb would use in a couple of minutes," the CEO wrote. "It also uses about 0.000085 gallons of water; roughly one fifteenth of a teaspoon." Breaking ChatGPT's footprint down into its smallest units, notably, obfuscates more legitimate projections of the chatbot's actual energy toll. According to a recent study conducted by the University of California, Riverside and the Washington Post, ChatGPT already uses nearly 40 million kilowatt-hours of energy per day, which is enough to power the Empire State Building for about 18 months, or charge eight million smartphones. Notably, those figures don't take into account any other LLMs or AI systems, meaning the real environmental impact of AI is even greater. Despite how much of the world's energy he's already claimed, Altman is saying that he and his fellow travelers will need more and more to keep their hallucination machines running -- and as Su suggested, there may never be enough GPUs to satisfy that power hunger.
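The gap between Altman's per-query figure and the study's fleet-level estimate is easy to make concrete. Here is a minimal back-of-envelope sketch; the daily query volume of one billion is an illustrative assumption, not a reported number:

```python
# Compare Altman's per-query energy figure with the fleet-level daily
# estimate from the UC Riverside / Washington Post study cited above.
PER_QUERY_WH = 0.34              # Altman's stated energy per ChatGPT query
STUDY_KWH_PER_DAY = 40_000_000   # ~40 million kWh/day, per the study

queries_per_day = 1_000_000_000  # assumed: one billion queries a day
implied_kwh_per_day = queries_per_day * PER_QUERY_WH / 1000  # Wh -> kWh

print(f"implied by per-query math: {implied_kwh_per_day:,.0f} kWh/day")
print(f"study's estimate:          {STUDY_KWH_PER_DAY:,} kWh/day")
print(f"gap: roughly {STUDY_KWH_PER_DAY / implied_kwh_per_day:.0f}x")
```

Even under this generous query count, the per-query arithmetic accounts for only a small slice of the study's total, which is consistent with the criticism that inference-only numbers leave out training and the rest of the data center footprint.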
[5]
OpenAI CEO Says We've Already Passed the "Superintelligence Event Horizon"
ChatGPT now has 800 million weekly users, who Altman said rely on the technology. Humanity may already be entering the early stages of the singularity, the point at which AI surpasses human intelligence, according to OpenAI CEO Sam Altman. In a blog post published Tuesday, Altman said humanity has crossed a critical inflection point -- an "event horizon" -- marking the beginning of a new era of digital superintelligence. "We are past the event horizon; the takeoff has started," he wrote. "Humanity is close to building digital superintelligence, and at least so far it's much less weird than it seems like it should be." Altman's analysis comes at a time when leading AI developers warn that artificial general intelligence could soon displace workers and disrupt global economies, outpacing the ability of governments and institutions to respond. The singularity is a theoretical point when artificial intelligence surpasses human intelligence, leading to rapid, unpredictable technological growth and potentially profound changes in society. An event horizon is a point of no return, beyond which the course of an object -- in this case, AI development -- cannot be reversed. Altman argued that we're already entering a "gentle singularity" -- a gradual, manageable transition toward powerful digital superintelligence, not a sudden wrenching change. The takeoff has begun, he says, but remains comprehensible and beneficial. As evidence, Altman pointed to the surge in ChatGPT's popularity since its public launch in 2022: "Hundreds of millions of people rely on it every day and for increasingly important tasks," he said. The numbers back him up. In May 2025, ChatGPT reportedly had 800 million weekly active users. Despite ongoing legal battles with authors and media outlets, as well as calls for pauses on AI development, OpenAI shows no signs of slowing down. Altman emphasized that even slight improvements in the technology could deliver substantial benefits. 
But a small misalignment, scaled across hundreds of millions of users, could have serious consequences. To solve for these misalignments, Altman offered several suggestions. He said the next five years are critical for AI development. "2025 has seen the arrival of agents that can do real cognitive work; writing computer code will never be the same," he said. "2026 will likely see the arrival of systems that can figure out novel insights. 2027 may see the arrival of robots that can do tasks in the real world." By 2030, Altman predicted, both intelligence and the capacity to generate and act on ideas will be widely available. "Already, we live with incredible digital intelligence, and after some initial shock, most of us are pretty used to it," he said, pointing out how quickly people shift from being impressed by AI to expecting it. As the world anticipates the rise of artificial general intelligence and the singularity, Altman believes the most astonishing breakthroughs won't feel like revolutions -- they'll feel ordinary, the bare minimum AI players need to offer to enter the market. "This is how the singularity goes: wonders become routine, and then table stakes," he said.
[6]
'ChatGPT Is Already More Powerful Than Any Human,' OpenAI CEO Sam Altman Says
OpenAI backer Microsoft and its rivals are investing billions of dollars into AI and jockeying for users in what is becoming a more crowded landscape. Humanity could be close to successfully building an artificial superintelligence, according to Sam Altman, the CEO of ChatGPT maker OpenAI and one of the faces of the AI boom. "Robots are not yet walking the streets," Altman wrote in a blog post late Wednesday, but "in some big sense, ChatGPT is already more powerful than any human who has ever lived." Hundreds of millions of people use AI chatbots every day, Altman said. And companies are investing billions of dollars in AI and jockeying for users in what is quickly becoming a more crowded landscape. OpenAI, backed by Microsoft (MSFT), wants to build "a new generation of AI-powered computers," and last month announced a $6.5 billion acquisition deal with that goal in mind. Meanwhile, Google parent Alphabet (GOOGL), Apple (AAPL), Meta (META), and others are rolling out new tools that integrate AI more deeply into their users' daily lives. "The 2030s are likely going to be wildly different from any time that has come before," Altman said. "We do not know how far beyond human-level intelligence we can go, but we are about to find out." Eventually, there could be robots capable of building other robots designed for tasks in the physical world, Altman suggested. In his blog post, Altman said he expects there could be "whole classes of jobs going away" as the technology develops, but that he believes "people are capable of adapting to almost anything" and that the rapid pace of technological progress could lead to policy changes. But ultimately, "in the most important ways, the 2030s may not be wildly different," Altman said, adding that "people will still love their families, express their creativity, play games, and swim in lakes."
[7]
Sam Altman Says Humans are Already Past "the A.I. Event Horizon"
Sam Altman explains how A.I. will revolutionize industries and challenge societal structures in the coming decades. OpenAI CEO Sam Altman believes we are already past "the A.I. event horizon," he said in a new blog post yesterday (June 11), arguing that A.I. development is quietly reshaping civilization -- even if the shift feels subtle. "The takeoff has started. Humanity is close to building digital superintelligence, and at least so far, it's much less strange than it seems it should be," he wrote. According to the OpenAI CEO, 2025 marks a pivotal shift in A.I. capabilities, particularly in coding and complex reasoning. By next year, he expects A.I. systems to begin generating original scientific ideas, with autonomous robots functioning effectively in the physical world by 2027. "In the 2030s, intelligence and energy are going to become wildly abundant. These two have long been the fundamental limiters on human progress," he wrote. "With abundant intelligence and energy (and good governance), we can theoretically have anything else." One key driver of this shift is A.I. infrastructure, such as computing power, servers and data center storage. As it becomes more automated and easier to deploy, the cost of intelligence could soon be as low as that of electricity. That, in turn, would supercharge scientific discovery, enable infrastructure to build itself, and unlock new frontiers in health care, materials science and space exploration. "If we can do a decade's worth of research in a year, or a month, then the rate of progress will obviously be quite different," Altman wrote. 
Altman also addressed a common question: how much energy does a ChatGPT query use? He revealed that a typical query consumes just 0.34 watt-hours of energy and 0.000085 gallons of water -- roughly the energy an oven uses in about a second, and as little water as one-fifteenth of a teaspoon. While some fear that A.I. could render human labor obsolete, Altman believes that by 2030, A.I. will amplify human creativity and productivity, not replace it. "In some big sense, ChatGPT is already more powerful than any human who has ever lived. A small new capability can create a hugely positive impact," he wrote. However, Altman also acknowledged the dangers. He noted that alignment -- the challenge of ensuring A.I. systems understand and follow long-term human values -- is still unsolved. He cited social media algorithms as an example of poorly aligned A.I.: tools optimized for engagement that often produce harmful societal outcomes. The real threat is not that A.I. will replace human purpose, but that society might fail to evolve the systems and policies necessary for people to thrive alongside increasingly intelligent machines. He urged global leaders to begin a serious conversation about the values and boundaries that should guide A.I. development before the technology becomes too deeply entrenched to redirect. "The sooner the world can start a conversation about what these broad bounds are and how we define collective alignment, the better," he wrote.
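Altman's appliance analogies do check out arithmetically. A quick sketch, with appliance wattages as assumptions (a ~1 kW oven element and a ~10 W LED bulb), since the blog post doesn't specify them:

```python
# Convert Altman's 0.34 Wh-per-query figure into appliance runtimes.
QUERY_WH = 0.34  # watt-hours per ChatGPT query, per Altman

def runtime_seconds(watt_hours: float, appliance_watts: float) -> float:
    """Seconds an appliance could run on the given amount of energy."""
    return watt_hours * 3600 / appliance_watts

oven_seconds = runtime_seconds(QUERY_WH, 1000)      # assumed ~1 kW oven
bulb_minutes = runtime_seconds(QUERY_WH, 10) / 60   # assumed ~10 W LED bulb

print(f"oven: {oven_seconds:.1f} s")    # matches "a little over one second"
print(f"bulb: {bulb_minutes:.1f} min")  # matches "a couple of minutes"
```

At those assumed wattages the figure works out to about 1.2 seconds of oven time and about 2 minutes of bulb time, in line with the comparisons Altman gives.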
[8]
Sam Altman Reveals ChatGPT's Energy Bill and the Road to Superintelligence
OpenAI CEO Sam Altman recently penned a blog post titled "The Gentle Singularity" and revealed how much energy ChatGPT uses for each query. Altman wrote that, on average, a ChatGPT query uses about 0.34 watt-hours, close to what "an oven would use in a little over one second, or a high-efficiency lightbulb would use in a couple of minutes." Altman further noted that a ChatGPT query also uses "about 0.000085 gallons of water; roughly one fifteenth of a teaspoon." Altman went on to say that "the cost of intelligence should eventually converge to near the cost of electricity." But besides ChatGPT's energy consumption, what caught my attention was Altman's opening paragraph. He declares in a rather dramatic way that we are accelerating towards superintelligence: "We are past the event horizon; the takeoff has started. Humanity is close to building digital superintelligence, and at least so far it's much less weird than it seems like it should be." It appears Altman is apprising the public that we have crossed the threshold and are moving towards transformative AI. The next paragraph gets even more interesting, where he writes, "The least-likely part of the work is behind us; the scientific insights that got us to systems like GPT-4 and o3 were hard-won, but will take us very far." This suggests that current AI technologies are sufficient to take us to AGI (Artificial General Intelligence) and, eventually, ASI (Artificial Superintelligence). This proposition is in direct conflict with prominent AI skeptics, including Meta AI's chief scientist, Yann LeCun, who say LLMs have hit a wall and are not capable of leading us to AGI or ASI. Altman further lays out a timeline of what's coming: agents that can do real cognitive work in 2025, systems that can figure out novel insights in 2026, and robots that can do tasks in the real world in 2027. Altman also touched on "recursive self-improvement," an AI system that can autonomously improve itself. He writes that current AI systems are not completely autonomous, but "this is a larval version of recursive self-improvement." 
Current AI systems are beginning to improve the process of building better systems, which suggests that future AI may help build more advanced AI. OpenAI is already hearing from scientists that they have become two or three times more productive with the help of current AI systems. Having said that, Altman acknowledges that job displacement will lead to serious societal disruption. He writes, "There will be very hard parts like whole classes of jobs going away, but on the other hand the world will be getting so much richer so quickly that we'll be able to seriously entertain new policy ideas we never could before." In this regard, OpenAI's former chief scientist and SSI chief, Ilya Sutskever, said the following at the University of Toronto: "Slowly but surely, or maybe not so slowly, AI will keep getting better. And the day will come when AI will do all of our... all the things that we can do, not just some of them, but all of them. Anything which I can learn, anything which any one of you can learn, the AI could do as well. How do we know this, by the way? How can I be so sure? How can I be so sure of that? The reason is that all of us have a brain, and the brain is a biological computer. That's why we have a brain. The brain is a biological computer. So, why can't the digital computer, a digital brain, do the same things? This is the one sentence summary for why AI will be able to do all those things: because we have a brain and the brain is a biological computer. And so you can start asking yourselves, what's going to happen? What's going to happen when computers can do all of our jobs? Right? Those are really big questions." From AI researchers to industry leaders, many are claiming that AI is a transformative technology and will lead to a world of abundance. However, before that future arrives, the world will likely see societal disruptions. How much of this bold vision will become reality remains to be seen.
OpenAI CEO Sam Altman claims humanity is approaching a "gentle singularity," with AI superintelligence on the horizon. His predictions spark discussions on AI's societal impact, energy consumption, and ethical concerns.
OpenAI CEO Sam Altman has made waves in the tech world with his recent blog post, "The Gentle Singularity," where he boldly claims that humanity is on the brink of achieving artificial superintelligence 1. Altman's predictions have sparked intense debate about the future of AI and its potential impact on society.
Source: Decrypt
Altman argues that we have already passed the "event horizon" for AI development, suggesting that the path to superintelligence is now inevitable 5. He predicts a rapid acceleration of AI capabilities in the coming years: agents doing real cognitive work in 2025, systems capable of novel insights in 2026, and robots handling real-world tasks by 2027.
This timeline is significantly more aggressive than previous predictions, such as Ray Kurzweil's 2045 estimate for the singularity 1.
One of the most controversial aspects of Altman's vision is his statement regarding AI's energy requirements. During AMD's AI conference, Altman suggested that "a significant fraction of the power on Earth should be spent running AI compute" 4. This comment has raised serious concerns about the environmental impact of AI development.
Source: LaptopMag
Critics point out that conventional electricity generation still relies heavily on fossil fuels, that power grids are already stretched to the brink, and that per-query figures ignore the far larger energy cost of training models in the first place.
Altman's predictions have reignited discussions about the broader implications of advanced AI, from job displacement and proposals like universal basic income to the still-unsolved problem of aligning AI systems with human values.
Not everyone shares Altman's optimism about the imminent arrival of superintelligent AI. Skeptics such as Meta's chief AI scientist Yann LeCun argue that large language models have hit a wall, and others doubt it is even technically possible to build a machine more advanced than the human brain.
Source: Futurism
As the debate continues, it's clear that the rapid advancement of AI technology will have profound implications for society. Altman's vision of a "gentle singularity" presents both exciting possibilities and significant challenges that will need to be addressed as we move into this new era of artificial intelligence.