3 Sources
[1]
The votes are in: AI will hurt elections and relationships
Latest report from Stanford's AI boffins finds unsafe usage practices, widespread anxiety about impacts, and China catching up to the USA

Artificial intelligence has achieved mass adoption faster than the personal computer or the internet, reaching 53 percent of the population in just three years. The number of harmful AI incidents has increased correspondingly. And both experts and laypeople believe the impact will be felt in two areas: elections and relationships.

According to the 2026 AI Index Report [PDF], from Stanford University's Institute for Human-Centered Artificial Intelligence (HAI), "Responsible AI is not keeping pace with AI capability, with safety benchmarks lagging and incidents rising sharply." Documented AI incidents - defined by the AI Incident Database as "harms or near harms realized in the real world by the deployment of artificial intelligence systems" - reached 362 in 2025, up from 233 in 2024, the report says. That coincides with an increase in AI adoption: 88 percent of organizations say they're using AI, and about 80 percent of university students admit as much.

One possible explanation for that finding is that AI models have become quite good at programming, with scores on the SWE-bench test of success tackling real-world GitHub issues rising from 60 percent to close to 100 percent in the space of a year. High scores on a particular benchmark don't tell the full story, though, because AI models tend to be deficient in different areas. On the AA-Omniscient Index, designed to assess whether models will admit when they're unsure about something instead of just guessing, hallucination rates across 26 models varied from 22 percent to 94 percent.

When attorneys use AI models to make "over two dozen fake citations and misrepresentations of fact," and get called out for it by the US Sixth Circuit Court of Appeals, that's an example of what the Stanford HAI researchers mean when they say responsible AI hasn't kept pace with usage.
And despite all the talk about AI superintelligence, AI lags behind people when it comes to telling time - OpenAI's GPT-5.4 High managed to read analog clocks correctly just 50.6 percent of the time as of March 2026, compared to about 90 percent for "unspecialized humans," as described in the ClockBench benchmark [PDF]. Robots demonstrate even less competence, succeeding in only 12 percent of household tasks, based on the BEHAVIOR-1K simulation benchmark.

The HAI report, at 423 pages, represents the Stanford group's summary of the current state of AI research and its impact on society. Written by human researchers with help from ChatGPT and Claude, not to mention financial support from Google, OpenAI, and others, the report's findings extend beyond the scarcity of "responsible AI" to touch on various aspects of the AI industry.

In terms of public opinion, the report finds "AI experts and the US public disagree on nearly everything about AI's future, except that it will hurt elections and personal relationships." Sixty-four percent of the American public expect AI to reduce the number of jobs available to humans over the next two decades, while five percent foresee AI creating more jobs. Only 39 percent of experts anticipate fewer jobs, while 19 percent project more employment. Experts, however, believe that generative AI will contribute to 80 percent of US work hours by 2030, compared to the public's prediction of 10 percent.

Just 31 percent of US respondents said they trust their government to regulate AI responsibly, the lowest level of any country. With OpenAI backing an Illinois state bill that would limit the liability of AI companies in the event their models cause catastrophic harm, and the White House pursuing an "industry-friendly AI policy," it's not difficult to see how Americans might have doubts about their government's interest in protecting them.
The HAI report observes that Chinese AI models have closed the performance gap with US AI models. As of March 2026, the top US model, Claude Opus 4.6, scored 1,503 on the Arena benchmark, just 39 points - about 2.7 percent - above ByteDance's Dola-Seed Preview at 1,464. That lead had narrowed as of April 9, 2026, with Claude Opus 4.6 Thinking at 1,548, closely followed by Z.ai's GLM-5.1 at 1,530.

The US continues to lead in AI investment, said to have reached $285.9 billion in 2025. That's 23 times more than the $12.4 billion invested in China, though the report notes it may have under-counted government funding. Even so, the US is losing technical talent. "The number of AI researchers and developers moving to the US has dropped 89 percent since 2017, with an 80 percent decline in the last year alone," the report finds. ®
[2]
Stanford: China 'effectively' closes US AI model performance gap
AI productivity gains are often seen in the same sectors that cut entry-level jobs.

The US is no longer leading the AI model race, says the latest Stanford AI Index report, which finds that the performance gap between the two countries' models has "effectively closed". The AI Index is an initiative at the Stanford Institute for Human-centered Artificial Intelligence. In its eighth edition last year, it found that even though Chinese AI models were fast catching up in performance, the US was still the clear leader in the race.

That, however, has changed. Since early 2025, several Chinese models have taken the lead over their American counterparts, with China's DeepSeek-R1 marking the first major instance in February. Models from Chinese companies such as Alibaba, Zhipu and MiniMax have since consistently ranked high on leaderboards.

The US, however, continues to be AI's biggest backer, still producing more "top-tier" AI models and high-impact patents, while China leads in volume, industrial robot installations, citations and patent output. Private AI investment in the US reached around $285bn in 2025, with nearly 2,000 newly funded AI companies forming during the year. The country also hosts the most AI data centres.

Talent loss

AI has undoubtedly cemented its presence in society. Stanford reports that AI has reached mass adoption faster than the personal computer or the internet. Generative AI has already been adopted by more than 50pc of the population, with numbers sitting at 61pc in Singapore, 54pc in the United Arab Emirates, and around 28pc in the US. The technology is fast accelerating in capabilities, reaching more of the population than ever before.

Many of the notable AI models released last year can meet or exceed human baselines on PhD-level science questions, multimodal reasoning, and competition mathematics, the report finds, creating challenging circumstances for job seekers.
AI models purpose-built for science can outperform human scientists in many cases, it adds. On the flip side, the report finds connections between productivity gains and a decline in entry-level employment. The software development sector, which shows the clearest markers of productivity gains, saw a 20pc decline in US-based employees aged 22 to 25. Senior positions held by older developers, meanwhile, are growing in number.

Despite the massive investments, the US is struggling to attract global talent, with an 80pc drop in AI researchers and developers choosing to move to the country.

Responsibility takes a backseat

The report notes that responsible AI is not keeping up with AI capability, pointing to a lag in safety benchmarks and "spotty" reporting on benchmarks. Documented AI incidents rose to 362, up from 233 in 2024. Meanwhile, a recent study found that improving AI safety can affect model accuracy, adding to the challenge of improving model safety.

The report also touches on AI sovereignty, calling it a "defining feature of national policies". The EU, for one, launched the AI Continent Action Plan last April, promising to enhance AI infrastructure and reduce external dependence for its technological needs. However, newer open source developments - most notably, Open Claw - are helping redistribute who participates in the AI race. Technology firms are taking advantage of the widely accessible open source models by creating their own versions of Open Claw with enhanced security.
[3]
China has erased the US lead in AI, Stanford HAI's 2026 AI index reveals - SiliconANGLE
Stanford University researchers today released their highly anticipated 2026 AI Index Report, revealing a global landscape where artificial intelligence technology is being adopted at a record-breaking pace, even as public trust in AI oversight and transparency hits new lows.

The report by the Stanford Institute for Human-Centered Artificial Intelligence, known as Stanford HAI, is now in its ninth year. It's a comprehensive annual study that tracks the dizzying evolution of the AI industry, documenting a world in which America's lead over Chinese innovation has all but evaporated, and where the technology is already reshaping global workforces and changing the course of scientific discovery.

One of the most striking - and potentially concerning - takeaways from this year's report is the way China has reportedly erased the AI performance gap between itself and the U.S. In previous years' reports, the U.S. had always held a solid lead over Chinese innovators, but now the countries are neck-and-neck, with U.S. and Chinese models constantly trading places at the top of benchmarks ranking AI performance. Although the U.S. maintains a significant edge in terms of capital, infrastructure buildout and AI chips, China now holds sway in other key areas, such as patents, publications and autonomous robotics development, also known as "physical AI."

However, the report notes that it's no longer a two-horse race, with other nations also striving to be seen as "AI superpowers." These include South Korea, which has emerged as the world's leader in "innovation density," filing more patents per capita than any other country. As these countries all scramble for AI supremacy, the issue of "sovereignty" has become a top policy priority for many governments.
A number of European and Central Asian countries have invested significantly in their AI infrastructure over the last year, bringing the number of nations with "state-backed supercomputing clusters" to 44. However, the push for sovereign AI is not universal, and South American and Middle Eastern nations lag far behind. According to Stanford's researchers, this could lead to a new kind of "digital divide," in which nations that struggle to shape AI development are less likely to see its economic benefits.

More than 90% of all notable AI models are now created by private companies, and Stanford's researchers warn that this is leading to less transparency than before. Concerns about AI "black boxes" are nothing new, but the most powerful new models being released today are even more mysterious than their predecessors. According to the report, AI leaders including Google LLC, Anthropic PBC and OpenAI Group PBC have all abandoned the practice of disclosing their latest models' dataset sizes and training duration. Moreover, 80 of the 95 most notable models launched last year were released without their training code.

Meanwhile, these leading AI companies are trying to flex their political muscles. AI industry representatives have become pervasive in congressional hearings on AI, with their share of witnesses tripling since 2017, while the presence of neutral academics has plummeted. This shift, perhaps unsurprisingly, comes at a time when public trust in AI has hit a new low. The report found that just 31% of U.S. citizens now trust their government to regulate AI properly, the lowest score of all surveyed nations except China, where just 27% of people trust their government. EU citizens remain much more confident, with 53% voicing confidence.

There are also concerns about hardware supply chains, with almost the entire global AI industry still dependent on a single chipmaking foundry operated by Taiwan Semiconductor Manufacturing Co. in Taiwan.
The adoption of generative AI has grown faster than that of any other technology in history, the report found. Some 53% of the world's population now uses it regularly, outpacing innovations such as personal computers, the internet and smartphones. But opinions of the technology are mixed, with 59% saying it provides more benefits than drawbacks, and 52% saying it makes them nervous. Of concern, perhaps, is that while the U.S. leads in AI development, it ranks only 24th globally in adoption, with just 28.3% of Americans using generative AI regularly. That compares with China, Malaysia, Thailand, Indonesia and Singapore, where more than 80% of people expect AI to have a profound impact on their lives within the next three to five years.

The economic impact of AI is staggering too: Since 2013, corporate investment has increased 40-fold, while the consumer surplus associated with generative AI in the U.S. rose to $172 billion this year.

Another highlight of the report is the growing "vibe shift" between experts and the general public. While 73% of AI experts are optimistic about the technology's impact on jobs, just 23% of the public shares that belief. The skepticism of average citizens does seem justified, though, as the report notes that employment among younger workers in "AI-exposed fields" has already started to decline.

In addition, the report touches on the physical costs of AI's incredible growth. The industry's energy and water demands are becoming worryingly excessive. For instance, xAI Corp. is estimated to have created more than 72,000 tons of CO2 just to train its latest model, Grok 4. Meanwhile, the amount of water required for GPT-4o inference workloads is said to be enough to sustain 12 million people.

Finally, there are concerns about AI's impact on science, particularly in terms of its scope. Though AI tools have helped to make individual scientists three times more productive, this appears to be happening at the expense of the breadth of research, which increasingly favors data-rich topics, meaning less diversity than before.
Stanford University's 2026 AI Index Report shows China has effectively closed the AI model performance gap with the US, with Chinese models now competing at the top of global benchmarks. The report documents a sharp rise in AI-related incidents to 362 in 2025, while public trust in government AI regulation hits historic lows and the US struggles to retain technical talent.
The 2026 AI Index Report from Stanford University's Institute for Human-Centered Artificial Intelligence reveals a dramatic shift in the global AI landscape: the US-China AI competition has reached parity, with Chinese models now matching their American counterparts in performance [1][2]. As of March 2026, the top US model, Claude Opus 4.6, scored 1,503 on the Arena benchmark, just 39 points - about 2.7 percent - above ByteDance's Dola-Seed Preview at 1,464 [1]. By April 9, 2026, that performance gap had narrowed further, with Claude Opus 4.6 Thinking at 1,548, closely followed by Z.ai's GLM-5.1 at 1,530 [1].
Chinese models from Alibaba, Zhipu, and MiniMax have consistently ranked high on leaderboards, with DeepSeek-R1 marking the first major instance of Chinese leadership in February 2025 [2]. While the US still produces more top-tier AI models and high-impact patents, China leads in volume, industrial robot installations, citations, and patent output [2].

Artificial intelligence has achieved mass adoption faster than the personal computer or the internet, reaching 53 percent of the population in just three years [1].
Global AI adoption varies significantly by region, with 61 percent in Singapore, 54 percent in the United Arab Emirates, and around 28 percent in the US [2]. Generative AI has been adopted by more than 50 percent of the population globally [2]. Among organizations, 88 percent report using AI, while approximately 80 percent of university students admit to using the technology [1]. The consumer surplus associated with generative AI in the US rose to $172 billion this year, while corporate AI investment has increased 40-fold since 2013 [3].

Responsible AI is not keeping pace with AI capability, with safety benchmarks lagging and incidents rising sharply, according to the 423-page report [1]. Documented AI incidents reached 362 in 2025, up from 233 in 2024 [1][2]. These incidents are defined as harms or near harms realized in the real world by the deployment of AI systems [1].
Hallucination rates across 26 models varied dramatically, from 22 percent to 94 percent, on the AA-Omniscient Index, which assesses whether models admit uncertainty instead of guessing [1]. Real-world examples include attorneys using AI models to generate over two dozen fake citations and misrepresentations of fact, prompting criticism from the US Sixth Circuit Court of Appeals [1]. Recent studies found that improving AI safety can negatively affect model accuracy, adding to the challenge of advancing model safety [2].

Just 31 percent of US respondents said they trust their government to regulate AI responsibly, the lowest level of any country except China, at 27 percent [1][3]. Public trust remains higher in the EU, where 53 percent of citizens voice confidence in AI regulation [3]. This erosion of confidence coincides with OpenAI backing an Illinois state bill that would limit AI company liability in catastrophic harm events, and with the White House pursuing industry-friendly AI policy [1]. Political influence by AI companies has grown substantially, with AI industry representatives tripling their share of witnesses in congressional hearings since 2017, while neutral academic presence has plummeted [3]. More than 90 percent of all notable AI models are now created by private companies, leading to reduced transparency [3]. Leading AI companies including Google, Anthropic, and OpenAI have abandoned disclosing dataset sizes and training duration for their latest models, while 80 of the 95 most notable models launched last year were released without training code [3].
AI experts and the US public disagree on nearly everything about AI's future, except that it will have a negative impact on elections and personal relationships [1]. Sixty-four percent of the American public expect AI to reduce the number of jobs available over the next two decades, while only five percent foresee AI creating more jobs [1]. Among experts, 39 percent anticipate fewer jobs while 19 percent project more employment [1]. The software development sector, showing the clearest markers of AI productivity gains, experienced a 20 percent decline in entry-level jobs for US-based employees aged 22 to 25, while senior positions held by older developers are growing [2]. On the SWE-bench test of success tackling real-world GitHub issues, AI model scores rose from 60 percent to close to 100 percent in just one year [1]. Experts believe generative AI will contribute to 80 percent of US work hours by 2030, compared to the public's prediction of 10 percent [1].

The US continues to lead in AI investment, reaching $285.9 billion in 2025, which is 23 times more than the $12.4 billion invested in China, though the report notes potential undercounting of government funding [1][2]. Nearly 2,000 newly funded AI companies formed in the US during 2025, and the country hosts the most AI data centers [2]. Despite this financial dominance, the US is losing its ability to attract global AI talent: the number of AI researchers and developers moving to the US has dropped 89 percent since 2017, with an 80 percent decline in the last year alone [1][2]. This talent exodus comes as sovereign AI becomes a defining feature of national policies [2]. The EU launched the AI Continent Action Plan in April, promising to enhance AI infrastructure and reduce technological dependence [2]. The number of nations with state-backed supercomputing clusters has reached 44, though South American and Middle Eastern nations lag behind, potentially creating a new digital divide [3]. South Korea has emerged as the world leader in innovation density, filing more AI patents per capita than any other country [3]. Open-source AI developments like Open Claw are helping redistribute participation in the AI race, with technology firms creating enhanced-security versions [2].

Summarized by Navi