11 Sources
[1]
Developer survey shows trust in AI coding tools is falling as usage rises
AI tools are widely used by software developers, but they and their managers are still grappling with how best to put them to use, with growing pains emerging along the way. That's the takeaway from the latest survey of 49,000 professional developers by community and information hub Stack Overflow, which has itself been heavily impacted by the addition of large language models (LLMs) to developer workflows.

The survey found that four in five developers used AI tools in their workflow in 2025, a share that has grown rapidly in recent years. That said, "trust in the accuracy of AI has fallen from 40 percent in previous years to just 29 percent this year." The disparity between those two metrics illustrates the evolving and complex impact of AI tools like GitHub Copilot or Cursor on the profession. There's relatively little debate among developers that the tools are or ought to be useful, but people are still figuring out the best applications and their limits.

When asked about their top frustration with AI tools, 45 percent of respondents said they struggled with "AI solutions that are almost right, but not quite," the single most-reported problem. Unlike outputs that are clearly wrong, these can introduce insidious bugs or other problems that are difficult to identify immediately and time-consuming to troubleshoot, especially for junior developers who approach the work with a false sense of confidence born of their reliance on AI. As a result, more than a third of the developers in the survey "report that some of their visits to Stack Overflow are a result of AI-related issues." That is to say, code suggestions they accepted from an LLM-based tool introduced problems they then had to turn to other people to solve.

Even as major improvements have recently arrived via reasoning-optimized models, that close-but-not-quite unreliability is unlikely ever to vanish completely; it's endemic to how the predictive technology works. That's why 72 percent of survey participants said that "vibe coding" is not part of their professional work: some feel it's too unreliable, and it can introduce hard-to-debug issues that are not appropriate for production.

Why devs use the tools anyway

So given all that skepticism and frustration, why are devs still using the tools? In some cases, their managers are trying to force them to. More commonly, it's because the tools are still clearly useful; it's just important not to misapply them. Managers and individual contributors should bring AI tools into the workflow alongside robust training in best practices, so the tools aren't misused in ways that create more problems than they solve or waste more time than they save. Developers need to be less trusting of features like Copilot autocomplete, treating suggestions as a starting point rather than just hitting tab and moving on. Tools like that are best suited to a limited pair-programming relationship: asking the LLM to find problems or suggest more elegant solutions that you weigh critically, not to produce complete methods that you take at face value.
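To make that failure mode concrete, here is a small, hypothetical sketch (invented for illustration, not taken from the survey) of the kind of "almost right" completion respondents describe: it runs without error and looks plausible, which is exactly why it can slip past someone who just hits tab.

```python
# Hypothetical "almost right" AI completion: average scores across groups.
def average_scores(score_groups):
    total = 0
    count = 0
    for group in score_groups:
        total += sum(group)
        count += 1  # Subtle bug: counts groups, not individual scores
    return total / count

# average_scores([[90, 80], [70]]) silently returns 120.0 instead of 80.0.

# What a reviewing developer would write after spotting the flaw:
def average_scores_reviewed(score_groups):
    scores = [s for group in score_groups for s in group]
    if not scores:
        raise ValueError("no scores to average")
    return sum(scores) / len(scores)
```

Nothing here crashes or looks suspicious at a glance; the error only surfaces in the output, which is why treating suggestions as review candidates rather than finished code matters.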
The tools can also be useful for learning. The opportunity to keep building familiarity with new languages, frameworks, or methodologies is one of the things that draws people to the job, and LLMs can reduce friction in that process by answering questions in a more targeted way than laborious searches through often-incomplete technical documentation allow, which is exactly the sort of thing people have historically used Stack Overflow for. "Although we have seen a decline in traffic, in no way is it as dramatic as some would indicate," Stack Overflow Chief Product and Technology Officer Jody Bailey said in a comment to VentureBeat. Stack Overflow plans to commit some of its resources both to expanding AI tool literacy and to fostering community discussions that help solve issues specific to AI-assisted workflows.
[2]
For programmers, even as AI adoption climbs, trust wanes
Programmers are using AI more than ever, but they don't like or trust the tools very much, according to the 2025 Stack Overflow Developer Survey. The survey of almost 50,000 developers found that 84% now use or plan to use AI tools in their workflow, up from 76% last year. Over half of professional developers (51%) use these tools daily.

Such figures might suggest that programmers must love AI. However, only 60% expressed positive sentiment toward AI tools, down from over 70% in both 2023 and 2024. Distrust is a defining theme of the survey. In 2024, 43% of developers felt good about AI accuracy, and only 31% were skeptical. By 2025, 33% of developers trusted AI tool outputs, 46% expressed active distrust, and a mere 3% said they highly trusted the results. Among seasoned professionals, the "highly trust" figure dropped to just 2.6%, with 20% reporting strong skepticism.

In short, developers certainly use AI, but trusting the technology to get the job right on its own is another matter. And after IT leader Jason Lemkin's experience with a vibe-coding project that disintegrated, taking a production database with it, who can blame them?

The Stack Overflow study also found that the biggest single frustration, cited by 66% of developers, is dealing with "AI solutions that are almost right, but not quite," which often leads to the second-biggest frustration, cited by 45%: "Debugging AI-generated code is more time-consuming." As Bill Harding, CEO of Amplenote and GitClear, noted in GitClear's AI Copilot Code Quality study, which analyzed 211 million lines of code, "developers trust the current generation of AI assistants about as much as we trusted the previous generation, i.e., not much."

It's not just programmers who don't trust AI. A recent survey of more than 1,100 Americans revealed that only 8.5% said they "always trust" the information they receive from Google's AI Overviews, and 21% said they have zero trust in the feature's ability to surface accurate information. A new KPMG study found that, worldwide, 66% of people use AI, but only 46% trust AI systems.

Junior developers who trust AI the most could be making a big mistake. A popular blog post by Namanyay Goel, an independent developer, warned: "We're trading deep understanding for quick fixes, and while it feels great in the moment, we're going to pay for this later." Worries over AI-created technical debt are growing. Harding warned that if companies continue to measure developer productivity by the simple-minded metric of number of commits or lines of code written, AI-driven technical debt will spiral out of control. "Leaders need to recognize that more code is often worse," he said, suggesting that copying and pasting code leads to higher defect rates. Indeed, GitClear found a direct connection between the rising defect rate and AI adoption.

All that said, Stack Overflow also revealed in its survey that OpenAI's GPT models are the most popular large language models, with 82% of developers who use AI indicating that they used them for development work in the past year. Anthropic's Claude Sonnet models came second, followed by Google's Gemini Flash. Despite AI's rise, when it comes to integrated development environments (IDEs), programmers still prefer Visual Studio Code (75%) and Visual Studio (29%) over AI-first programming IDEs.
The old-school, simple code editors Vim and Notepad++ remain popular, even among programmers who use AI. That said, Microsoft's incorporation of Copilot into its tools has proven to be a smart move.

As before, JavaScript, HTML/CSS, and Python maintain their status as the most widely used programming languages. Python, probably due to the popularity of the Python-based machine learning libraries TensorFlow and PyTorch, is especially sought after by developers adopting a new language. However, Rust, with an 83% approval mark, remains the most admired language.

Looking ahead, while AI is being adopted quickly, developers are, if anything, growing cautious about handing off critical tasks to agents. A vast majority (75%) said human advice is still irreplaceable in scenarios where they don't trust AI's output. As for AI agents, they have yet to reach mainstream acceptance. Over half of the survey's respondents use simpler AI tools, and 38% have no plans to adopt agents soon.
[3]
Most developers use AI in their daily workflows - but they don't trust it, study finds
[4]
Coders using AI tools more, trusting less: StackOverflow
Vibe coding is right out, say most respondents in Stack Overflow survey

According to a new survey of worldwide software developers released on Tuesday, nearly all respondents are incorporating AI tools into their coding practices, but they're not necessarily all that happy about it. The report, part of an annual study conducted by developer help site Stack Overflow, reveals that 78.5 percent of respondents were already using AI developer tools at least "monthly or infrequently." Another 5.3 percent of this year's respondents planned to start using AI soon.

However, how these same respondents felt about the tools was significantly more split. While around 60 percent said their view of the tools was either "favorable" or "very favorable," another 20 percent said they felt either "indifferent" or "unsure" about them. Still another 20 percent viewed the tools unfavorably or very unfavorably.

The survey was based on 49,009 responses from across 160 countries, although the largest portion (20 percent) was based in the US. Answers were collected between May 29 and June 23 of this year. Respondents ranged in age from 18 to 65 and older and included every type of programmer, from experienced professionals to those just learning how to code. Interestingly, though, use of AI tools was about equal across all levels of experience, averaging about 80 percent.

So why the seeming ambivalence toward these same tools? Because a great many developers feel the tools just don't work well. Across the board, just 3.1 percent of respondents said that they "highly trust" results from AI tools, with the figure dropping to 2.5 percent among experienced developers. Those who were only learning to code had the most faith in AI, with a still-paltry 6.1 percent indicating high trust. Overall, though, approaching AI with caution appeared to be the norm. Around 44 percent of respondents said that they were either "somewhat" or "highly" distrustful of AI, and even the 31 percent who said they were "somewhat trustful" of the tools weren't exactly exuding confidence.

"Complex tasks" were reportedly AI's worst weakness (although exactly what those tasks included was left to survey respondents' interpretation). AI was either "bad" or "very poor" at handling complex tasks according to 40 percent of respondents. A mere 4.4 percent said the tools handled complex tasks "very well," while 17 percent said they don't use AI for complex tasks at all.

Popular sentiment says that companies are increasingly using AI to generate code that human programmers would ordinarily write. Microsoft CEO Satya Nadella has been widely quoted as saying 30 percent of Redmond's code is already attributable to AI. But Stack Overflow's survey seems to indicate this isn't typical of the broader software industry. Only 17 percent of survey respondents said that they were "currently mostly" using AI to write code, while 29 percent said that they don't plan to use it for that purpose at all. And "vibe coding," the fully AI-centric programming method that's made headlines, is right out, with 76 percent of those surveyed responding either "no" or "no, emphatically."

What they do use it for is perhaps more enlightening. Replacing or supplementing traditional search engines was one popular choice, with 87 percent saying they used AI either for "searching for answers" or "learning new concepts or technologies" (or both). Many developers' reservations about AI seem to stem from what the survey defined as "frustrations" with the tech.
Chief among these was 66 percent of respondents' belief that AI produced "solutions that were almost right, but not quite." What's more, 16 percent lamented that "it's hard to understand how or why the code works." This, in turn, generates further problems, with 45 percent of respondents griping that debugging AI-generated code is "more time-consuming" than debugging human-written code.

Then there's the use of AI agents, the buzzword du jour, in software development, but these seem to be either poorly understood or not yet widely adopted. Fully 69 percent of respondents said that they don't currently use agents in their workflows, with 38 percent of those adding that they don't plan to. Furthermore, 41 percent said that agents have had very little positive effect on their productivity.

And it seems that there are still important uses for humans in the software development supply chain after all. Stack Overflow's survey reveals that 75 percent of developers would still seek out a person for help in cases "when I don't trust AI answers." Ethical or security concerns about code called for human intervention according to 62 percent of respondents, and 58 percent would call upon a human "when I want to fully understand something." Similar majorities would prefer to work with people when learning best practices or simply "when I'm stuck."

This should all come as welcome news to anyone hoping to enter, or merely survive, the modern software industry. While some companies are even suggesting that new applicants should be prepared to use AI during the application process, Stack Overflow's survey indicates that the reality for most is somewhat different. The survey showed just 4.3 percent of respondents claiming, "I don't think I'll need help from people anymore," thanks to AI. It seems, then, that for all the AI hype, anthropocentric workplaces are still here for a while. ®
[5]
AI use among software developers grows but trust remains an issue - Stack Overflow survey
84% of developers surveyed by Stack Overflow either use or plan to use AI tools, while 66% cited 'almost right' AI output and 45% cited time-consuming debugging of AI-generated code as their top frustrations.

While the number of developers using AI keeps growing, many of them do not trust the accuracy of output from AI tools, according to the latest Stack Overflow survey of software developers. Released July 29, the 2025 Stack Overflow Developer Survey found what Stack Overflow referred to as a widening AI trust gap. For the third year in a row, the company's survey found an increasing number of developers using or planning to use AI tools year over year: 84% in 2025 versus 76% in 2024. However, 46% of the developers surveyed said they do not trust the accuracy of the output from AI tools, up from 31% last year. This year's survey included an expanded section dedicated to the growing landscape of AI, with 15 new questions on the topic.

"The growing lack of trust in AI tools stood out to us as the key data point in this year's survey, especially given the increased pace of growth and adoption of these AI tools. AI is a powerful tool, but it has significant risks of misinformation or can lack complexity or relevance," said Prashanth Chandrasekar, CEO of Stack Overflow, in a statement.
[6]
Developers increasingly embrace AI tools even as their trust in them falls
The big picture: Software developers are increasingly weaving AI tools into their work, but such rapid adoption hasn't come without confusion or conflict. They and their managers are still trying to work out when these tools help, when they hurt, and how to integrate them without creating more problems than they solve.

In its annual poll of 49,000 professional developers, Stack Overflow found that 80 percent use AI tools in their work in 2025, a share that has surged in recent years. Despite this wide and rapid adoption, trust in those tools is falling. The survey shows that only 29 percent of respondents say they trust AI's accuracy, down from 40 percent in past surveys. That gap, widespread adoption alongside growing skepticism, reflects the complex impact of AI-powered coding assistants like GitHub Copilot or Cursor. While few developers question their usefulness, many are still learning how to apply these tools effectively and understand their limits.

When asked about their biggest frustration, 45 percent pointed to AI-generated solutions that seem mostly correct but contain subtle flaws. Unlike clearly incorrect code, these near-misses can introduce hidden bugs and logic errors that take hours to untangle, especially for junior developers who accept AI suggestions too readily. The fallout often circles back to Stack Overflow itself. More than a third of developers said they visit the site because of AI-related issues, meaning code generated by tools they trusted created problems they couldn't solve on their own.

Recent advances in reasoning-focused models have improved reliability. Still, the survey suggests that AI's "close-but-not-quite" problem is here to stay; it's tied to how predictive text generation works. That's one reason 72 percent of developers reject the idea of "vibe coding," or casually pasting AI-suggested code into production use.

Even with those frustrations, few are abandoning these tools. In some cases, managers push teams to adopt them. More often, developers themselves see clear benefits, so long as they use the tools carefully. Industry experts say the key is training and mindset. Developers should treat AI-powered autocomplete as a sparring partner, not a silent copilot. Those who simply tab through GitHub Copilot's suggestions risk embedding flaws; those who use it to spot issues or refine ideas get the most value.

The tools also offer an educational upside. Artificial intelligence can flatten the learning curve for new languages or frameworks, offering targeted answers that complement traditional documentation searches, a function Stack Overflow has filled for years.

Chief Product and Technology Officer Jody Bailey told VentureBeat that Stack Overflow is rethinking its role as AI changes how developers seek help and share knowledge. He acknowledged that the company has seen fewer visits but warned that many people overstate the narrative around that trend. "Although we have seen a decline in traffic, in no way is it as dramatic as some would indicate," he said. That shift is causing Stack Overflow to critically reassess how it gauges success in the modern digital age.
[7]
Developers remain willing but reluctant to use AI: The 2025 Developer Survey results are here
No need to bury the lede: more developers are using AI tools, but their trust in those tools is falling.

The Stack Overflow Developer Survey is full of new insights about technology, tools of the trade, community, careers, and more from 49,000+ developers from around the world, and we're eager to share how the data stacks up this year. No need to bury the lede: while the adoption of AI tools continues to increase, so does developers' lack of trust in the output of those tools. The effect of AI on the developer ecosystem is everywhere in this survey, from the programming languages developers are using and want to use, to the influence of AI on the tools developers used this year, to preferences for community platforms and content.

Cracks in the foundation are showing as more developers use AI

Trust but verify? Developers are frustrated, and this year's results demonstrate that the future of code is about trust, not just tools. AI tool adoption continues to climb, with 80% of developers now using them in their workflows. Yet this widespread use has not translated into confidence. In fact, trust in the accuracy of AI has fallen from 40% in previous years to just 29% this year. We've also seen positive favorability toward AI decrease from 72% to 60% year over year. The cause for this shift can be found in the related data:

- The number-one frustration, cited by 45% of respondents, is dealing with "AI solutions that are almost right, but not quite," which often makes debugging more time-consuming. In fact, 66% of developers say they are spending more time fixing "almost-right" AI-generated code.
- When the code gets complicated and the stakes are high, developers turn to people. An overwhelming 75% said they would still ask another person for help when they don't trust AI's answers.
- 69% of developers have spent time in the last year learning new coding techniques or a new programming language; 44% learned with the help of AI-enabled tools, up from 37% in 2024.
- 36% of developers learned to code specifically for AI in the last year; developers of all experience levels are just starting to invest time in AI programming.

The adoption of AI agents is far from universal. We asked if the AI agent revolution was here, and the answer is a definitive "not yet." While 52% of developers say agents have affected how they complete their work, the primary benefit is personal productivity: 69% agree they've seen an increase. When asked about "vibe coding" -- generating entire applications from prompts -- nearly 72% said it is not part of their professional work, and an additional 5% emphatically do not participate in vibe coding. This aligns with the fact that most developers (64%) do not see AI as a threat to their jobs, though they are less confident about that than last year (when 68% believed AI was not a threat).

Community is more important than ever

In an era of AI-generated answers, the need for real human connection has never been more apparent. For the first time we asked about community platforms, and the results show that developers rely on a portfolio of resources, with Stack Overflow (84%), GitHub (67%), and YouTube (61%) leading the pack. This validates our vision to be the world's most vital source for technologists by providing trusted, human-verified knowledge everywhere developers work. The data shows a clear demand for this:

- 35% of developers use 6-10 distinct tools to get their work done, highlighting the need for seamless integration.
- When developers visit Stack Overflow, their top-ranked activity is reading comments, showing a deep interest in human-to-human context. It's why we're investing in features that create more ways to cultivate community and power learning.
- There is an emerging role for Stack Overflow: serving as the human-verified source of truth for AI-generated code. About 35% of developers report that some of their visits to Stack Overflow are a result of AI-related issues.

New tricks for old dogs: insights on technology trends in 2025

The technology-focused questions got a major upgrade this year, but standard questions regarding programming languages, operating systems, and how developers learn to code stayed the same. New questions this year cover LLM models, agentic AI tools, and top frustrations with AI. From the old guard, we see the influence of AI in a few key areas:

- Programming languages that are growing in popularity are also known to be AI-compatible: Python usage is up 7 percentage points, followed by Rust and Go (+2 percentage points each), all of which are now used in AI development and infrastructure.
- Android is the preferred operating system for personal use for 29% of developers, an increase of 11 percentage points since last year that has lifted Android above Ubuntu for personal use for the first time in the survey. While this may not be directly related to AI, Android's more open platform may give developers flexibility to tailor how much AI, and which kinds of AI, they use on the OS.
- Developers learning to code in the past year continue to use technical documentation more than other resources (68%) and are using AI tools more than they were last year (44%).

New questions asked this year surface technologies that have found a niche in the AI space:

- Among AI agent data storage tools, asked about separately from the databases we've tracked since 2017, respondents show a preference for Redis (43%) alongside GitHub MCP server (43%). While Redis has been on the survey as a database option since 2017, this year it shines as the top choice for AI agent data storage (a minimal sketch of this pattern appears at the end of this section).
- Developers are adapting their existing monitoring tools for agentic AI monitoring and observability, with tools like Sentry (32%) and New Relic (13%), both of which have been around for 20+ years.
- For the first time this year we asked about specific LLMs instead of asking about AI search and development tools generally. OpenAI chat models still retain the most usage among developers (81%). Anthropic's Claude Sonnet models are used more by professional developers (45%) than by those learning to code (30%).
- Developers are not just learning to code; they are learning to code for AI: 67% of developers indicated they were learning to code for AI in the workplace or on personal projects.

Career shifts, satisfaction, and what really matters

The survey reveals a developer workforce that is largely staying put, but not necessarily content. 46% of developers are "not looking" for a new job, but of those who are in a role, a combined 75% describe themselves as "complacent" or "not happy at work." Overall, there is an increase in happy developers compared to last year (24% vs. 20%). What contributes to job satisfaction? It's not just about the tech. The top drivers are "autonomy and trust," "competitive pay," and "solving real-world problems."
This focus on fundamentals is also reflected in what makes developers endorse a new technology. A "reputation for quality" and a "robust and complete API" rank far higher than "AI integration," which came in second to last. The message is clear: developers value tools that are reliable, functional, and solve real problems over those that simply ride the latest technology wave. On pay and satisfaction:

- Developer salaries show an increase in pay: this year we saw higher median pay for 20 of the roles we asked about, up 5-29% compared to last year's reported salaries per developer role.
- The US was highest for job satisfaction (29%) and Germany was lowest (19%). We know from previous analysis that job satisfaction is tied to pay and job flexibility.
- Autonomy and trust at work were ranked highest among reasons to be happy at work, but competitive pay, ranked second overall, was frequently ranked first, too.
- US median salaries are higher than other countries' median salaries. For example, compared to Germany, US cloud infrastructure engineers earned a 48% higher median salary in the past year.
- The US recorded almost twice as many remote workers (45%) as Germany (23%).

Developer insights are here

This year's survey paints a picture of a community navigating the complexities of a new technological era. Developers are ready to push back on enterprise AI through a nuanced conversation about trust, reliability, and the enduring value of human expertise. The next generation of developer tools, who developers are, where they are working, and what they look for in developer communities is documented here. This data serves as a critical reminder that the future of technology will be built not just on powerful tools, but also on the trusted communities that use them.
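Since the survey calls out Redis as the top choice for AI agent data storage, here is the minimal sketch promised above. Everything in it is an assumption made for illustration, not anything the survey prescribes: the key naming, the JSON message schema, the one-hour expiry, and a local Redis server reachable through the redis-py client are all invented for the example.

```python
import json

import redis  # redis-py client; assumes a Redis server on localhost:6379

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def remember(session_id: str, role: str, content: str) -> None:
    """Append one message to a hypothetical agent session's history."""
    key = f"agent:history:{session_id}"  # invented key scheme
    r.rpush(key, json.dumps({"role": role, "content": content}))
    r.expire(key, 3600)  # drop idle session state after an hour

def recall(session_id: str) -> list[dict]:
    """Reload the whole history for the next model call."""
    key = f"agent:history:{session_id}"
    return [json.loads(item) for item in r.lrange(key, 0, -1)]

# Example: persist two turns, then rebuild the agent's context window.
remember("demo", "user", "Refactor the billing module")
remember("demo", "assistant", "Here is a first pass...")
print(recall("demo"))
```

A list per session keeps messages ordered and cheap to append, which is presumably part of why an in-memory store like Redis fits agent workloads; a production setup would add authentication and persistence configuration.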
[8]
Stack Overflow data reveals the hidden productivity tax of 'almost right' AI code
More developers than ever are using AI tools to assist with and generate code. While enterprise AI adoption accelerates, new data from Stack Overflow's 2025 Developer Survey exposes a critical blind spot: the mounting technical debt created by AI tools that generate "almost right" solutions, potentially undermining the productivity gains they promise to deliver.

Stack Overflow's annual developer survey is one of the largest such reports in any given year. In 2024, the report found that developers were not worried that AI would steal their jobs. Somewhat ironically, Stack Overflow was initially negatively impacted by the growth of gen AI, with declining traffic and resulting layoffs in 2023.

The 2025 survey of over 49,000 developers across 177 countries reveals a troubling paradox in enterprise AI adoption. AI usage continues climbing: 84% of developers now use or plan to use AI tools, up from 76% in 2024. Yet trust in these tools has cratered.

"One of the most surprising findings was a significant shift in developer preferences for AI compared to previous years, while most developers use AI, they like it less and trust it less this year," Erin Yepis, Senior Analyst for Market Research and Insights at Stack Overflow, told VentureBeat. "This response is surprising because with all of the investment in and focus on AI in tech news, I would expect that the trust would grow as the technology gets better."

The numbers tell the story. Only 33% of developers trust AI accuracy in 2025, down from 43% in 2024 and 42% in 2023. AI favorability dropped from 77% in 2023 to 72% in 2024 to just 60% this year. But the survey data reveals a more urgent concern for technical decision-makers. Developers cite "AI solutions that are almost right, but not quite" as their top frustration; 66% report this problem. Meanwhile, 45% say debugging AI-generated code takes more time than expected. AI tools promise productivity gains but may actually create new categories of technical debt.

The 'almost right' phenomenon disrupts developer workflows

AI tools don't just produce obviously broken code. They generate plausible solutions that require significant developer intervention to become production-ready. This creates a particularly insidious productivity problem.

"AI tools seem to have a universal promise of saving time and increasing productivity, but developers are spending time addressing the unintended breakdowns in the workflow caused by AI," Yepis explained. "Most developers say AI tools do not address complexity, only 29% believed AI tools could handle complex problems this year, down from 35% last year."

Unlike obviously broken code that developers quickly identify and discard, "almost right" solutions demand careful analysis. Developers must understand what's wrong and how to fix it. Many report it would be faster to write the code from scratch than to debug and correct AI-generated solutions.

The workflow disruption extends beyond individual coding tasks. The survey found 54% of developers use six or more tools to complete their jobs. This adds context-switching overhead to an already complex development process.

Enterprise governance frameworks trail behind adoption

Rapid AI adoption has outpaced enterprise governance capabilities. Organizations now face potential security and technical debt risks they haven't fully addressed.
"Vibe coding requires a higher level of trust in the AI's output, and sacrifices confidence and potential security concerns in the code for a faster turnaround," Ben Matthews, Senior Director of Engineering at Stack Overflow, told VentureBeat. Developers largely reject vibe coding for professional work, with 77% noting that it's not part of their professional development process. Yet the survey reveals gaps in how enterprises manage AI-generated code quality. Matthews warns that AI coding tools powered by LLMs can and do produce mistakes. He noted that while knowledgeable developers are able to identify and test vulnerable code themselves, LLMs are sometimes simply unable to even register any mistakes they may produce. Security risks compound these quality issues. The survey data shows that when developers would still turn to humans for coding help, 61.7% cite "ethical or security concerns about code" as a key reason. This suggests that AI tools introduce integration challenges around data access, performance and security that organizations are still learning to manage. Developers still use Stack Overflow and other human sources of expertise Despite declining trust, developers aren't abandoning AI tools. They're developing more sophisticated strategies for integrating them into workflows. The survey shows 69% of developers spent time learning new coding techniques or programming languages in the past year. Of these, 44% used AI-enabled tools for learning, up from 37% in 2024. Even with the rise of vibe coding and AI, the survey data shows that developers maintain strong connections to human expertise and community resources. Stack Overflow remains the top community platform at 84% usage. GitHub follows at 67% and YouTube at 61%. Most tellingly, 89% of developers visit Stack Overflow multiple times per month. Among these, 35% turn to the platform specifically after encountering issues with AI responses. "Although we have seen a decline in traffic, in no way is it as dramatic as some would indicate," Jody Bailey, Chief Product & Technology Officer, told VentureBeat. That said, Bailey did admit that times change and the day-to-day needs of users are not the same as they were 16 years ago when Stack Overflow got started. He noted that there is not a single site or company not seeing a shift in where users come from or how they are now engaging with gen AI tools. That shift is causing Stack Overflow to critically reassess how it gauges success in the modern digital age. "The future vitality of the internet and the broader tech ecosystem will no longer be solely defined by metrics of success outlined in the 90s or early 00s," Bailey said. "Instead, the emphasis is increasingly on the caliber of data, the reliability of information, and the incredibly vital role of expert communities and individuals in meticulously creating, sharing and curating knowledge. " Strategic recommendations for technical decision-makers The Stack Overflow data suggests several key considerations for enterprise teams evaluating AI development tools. Invest in debugging and code review capabilities: With 45% of developers reporting increased debugging time for AI code, organizations need stronger code review processes. They need debugging tools specifically designed for AI-generated solutions. Maintain human expertise pipelines: Continued reliance on community platforms and human consultation shows that AI tools amplify rather than replace the need for experienced developers. 
These experts can identify and correct AI-generated code issues.

Implement staged AI adoption: Successful AI adoption requires careful integration with existing tools and processes rather than wholesale replacement of development workflows. This allows developers to leverage AI strengths while mitigating "almost right" solution risks.

Focus on AI tool literacy: Developers using AI tools daily show 88% favorability compared to 64% for weekly users. This suggests proper training and integration strategies significantly impact outcomes.

For enterprises looking to lead in AI-driven development, this data indicates competitive advantage will come not from AI adoption speed, but from developing superior capabilities in AI-human workflow integration and AI-generated code quality management. Organizations that solve the "almost right" problem, turning AI tools into reliable productivity multipliers rather than sources of technical debt, will gain significant advantages in development speed and code quality.
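One way to act on the first recommendation above is to gate AI-generated helpers behind edge-case tests before they merge. The sketch below is a hypothetical illustration, not anything from the survey: parse_duration stands in for pasted-in AI output, and its name and behavior are invented for the example.

```python
# test_parse_duration.py -- minimal test-first vetting with pytest.
import pytest

def parse_duration(text: str) -> int:
    """Convert strings like '2h', '30m', or '45s' to seconds.
    (Imagine this body arrived verbatim from an AI assistant.)"""
    units = {"h": 3600, "m": 60, "s": 1}
    return int(text[:-1]) * units[text[-1]]

def test_happy_path_passes():
    assert parse_duration("2h") == 7200
    assert parse_duration("30m") == 1800

def test_bad_input_should_raise_valueerror():
    # Team contract: invalid input raises ValueError. The pasted-in body
    # above fails both checks (IndexError and KeyError leak out instead),
    # which is exactly the "almost right" gap this test surfaces in review.
    with pytest.raises(ValueError):
        parse_duration("")
    with pytest.raises(ValueError):
        parse_duration("90")
```

The happy-path tests pass while the contract tests fail, turning a plausible-looking suggestion into a concrete review finding before it reaches production.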
[9]
Developers are finding it hard to trust AI - and not just because it could steal their jobs
As developers become more accustomed to AI tools in their workflows, it's becoming increasingly clear that they don't always trust the output, new research has claimed. The latest developer survey from Stack Overflow has revealed that although AI adoption is up to 84% from 76% in 2024, there has also been a huge jump in the number of developers who don't trust AI-generated results, up from 31% in 2024 to 46% in 2025. On the flip side, only 3.1% highly trust AI results, a sentiment that's more common among beginners (6.1%) than among experienced devs (2.5%). Currently, as many as 78.5% of developers use AI tools at least monthly or infrequently, and this is consistent across all experience levels, according to the study.

Despite the lack of trust, it's clear that developers see artificial intelligence as a useful starting point, with as many as three in five viewing AI tools favorably, compared with just one in five who see them unfavorably (and a further one in five who are indifferent). But that's all it is at the moment: a starting point. Three-quarters admitted that they'd still ask a human when they don't trust AI answers, with 58% preferring to ask humans when they don't fully understand something and a similar number seeking human help for ethical and security concerns.

"AI is a powerful tool, but it has significant risks of misinformation or can lack complexity or relevance," Stack Overflow CEO Prashanth Chandrasekar explained. While its use cases in the development cycle may be more limited, artificial intelligence is proving useful in other areas: 44% use it to learn to code (up from 37% last year), and 36% use it for work or advancement. "By providing a trusted human intelligence layer in the age of AI, we believe the tech enthusiasts of today can play a larger role in adding value to build the AI technologies and products of tomorrow," Chandrasekar added.
[10]
Developer trust in AI tools is falling, survey finds
While more developers are using AI tools, a new survey shows their trust in them is declining due to frustration.

While the use of AI development tools continues to grow, developers' trust in the accuracy and reliability of those tools is actively declining. According to the results of the 2025 Stack Overflow Developer Survey, which gathered insights from over 49,000 developers worldwide, the developer ecosystem is navigating a complex relationship with AI. The findings reveal a significant paradox: while more developers are incorporating AI into their workflows, their confidence in the output is falling, leading to widespread frustration and a renewed emphasis on human-verified knowledge.

The survey data shows a clear divergence between the adoption of AI tools and the trust developers place in them. While 80% of developers now report using AI tools in their development process, trust in the accuracy of these tools has fallen from 40% in previous years to just 29% this year. Similarly, positive favorability toward AI has decreased from 72% to 60% year-over-year.

The primary cause of this growing skepticism appears to be the practical challenge of working with imperfect AI-generated code. The number-one frustration, cited by 45% of respondents, is dealing with "AI solutions that are almost right, but not quite". This issue often makes debugging more difficult and time-consuming. In fact, 66% of developers state that they are spending more time fixing this "almost-right" code. This inefficiency and lack of reliability means that when the stakes are high, developers still turn to people. An overwhelming 75% of developers said they would ask another person for help when they do not trust an AI-generated answer.

The survey also indicates that the "AI agent revolution" has not yet arrived in professional workflows. While 52% of developers say agents have affected how they complete their work, the primary benefit remains personal productivity, with 69% agreeing they have seen an increase. However, the concept of "vibe coding" -- generating entire applications from prompts -- is not a common professional practice, with nearly 72% stating it is not part of their work. This reality aligns with job security perceptions; 64% of developers do not see AI as a threat to their jobs, though this is a slight decrease in confidence from 68% in the previous year.

In an environment saturated with AI-generated answers, the need for human connection and verified knowledge has become more apparent. The survey found that developers rely on a portfolio of community resources, with Stack Overflow (84%), GitHub (67%), and YouTube (61%) leading the pack. The top-ranked activity for developers visiting Stack Overflow is reading comments, which shows a deep interest in human-to-human context. The platform is also taking on an emerging role as a source of truth for AI-related problems, with about 35% of developers reporting that some of their visits are the result of an AI issue.

The influence of AI is also visible in technology trends. Programming languages known to be compatible with AI development are growing in popularity, including Python (usage up 7 percentage points), Rust, and Go (both up 2 percentage points). When it comes to Large Language Models, OpenAI's chat models are the most used at 81%, while Anthropic's Claude models see higher usage among professional developers (45%) compared to those learning to code (30%).
Regarding career satisfaction, the survey reveals a workforce that is largely staying in place but is not necessarily content. A combined 75% of developers in a role describe themselves as "complacent" or "not happy at work," although overall happiness did see a slight increase from 20% last year to 24% this year. The top drivers of job satisfaction were identified as "autonomy and trust," "competitive pay," and "solving real-world problems." Notably, when asked what makes them endorse a new technology, developers ranked "AI integration" second to last, far below factors like a "reputation for quality" and a "robust and complete API."
[11]
Stack Overflow Survey Reveals Developers Still Use AI Tools Despite Frustrations, Citing Manager Pressure And Practical Benefits - But Warn Against Misuse And Costly Coding Mistakes
While AI tools have become a staple of modern software development, managers and developers are still figuring out how to unlock their full potential, and those efforts are not free of challenges. A large-scale survey conducted by Stack Overflow reflects the same struggle: its main insight is that developers are still working out how to use the tools more effectively.

LLMs are rapidly changing the entire software development process, and tools like ChatGPT or Copilot affect not only developers but also sites like Stack Overflow that are meant for coding assistance, since many people can now get answers from chatbots instead of these dedicated platforms. Stack Overflow surveyed 49,000 professional developers to see how deeply AI has been integrated into coding workflows. The platform was once the go-to resource for developers, but it has been disrupted by the growing impact of large language models (LLMs) that are transforming the way developers write and debug code.

As per the latest report, four out of five developers now rely on AI tools for their work. Yet despite this increased reliance, many are skeptical of leaning on the models too heavily: trust in AI-generated responses has dropped from 40 percent to 29 percent within a year. This disparity highlights how, even as usage of AI tools becomes increasingly widespread, trust in the technology is declining, owing to the complex impact that tools such as GitHub Copilot and Cursor have on the profession. While most developers understand that these tools are here to stay, they are unsure how best to use them and where the boundaries should be drawn.

When developers were asked what they find frustrating about these AI tools, most pointed to questions of accuracy and reliability. Clearly incorrect code is easy to detect, but subtle bugs and hard-to-spot errors are time-consuming to fix. The situation is graver for junior developers, who place a lot of trust in AI-generated code and end up feeling confident about results that may turn out to be wrong and hard to repair. The survey highlighted this problem as well: many developers said they visited Stack Overflow after experiencing an issue with AI tools, meaning that while they initially relied on LLMs, they later ran into problems that required help from the wider developer community. Despite advances in AI, these problems appear to be fundamental limitations and cannot be eliminated entirely; some uncertainty will always remain because the models generate code based on patterns. Developers keep using these AI tools despite the skepticism, both because managers are pushing for wider adoption and because the models are genuinely useful; they simply have to be used wisely.
The 2025 Stack Overflow Developer Survey shows a significant increase in AI tool usage among developers, but a concurrent decline in trust regarding the accuracy and reliability of these tools.
The 2025 Stack Overflow Developer Survey, encompassing responses from nearly 50,000 developers across 160 countries, reveals a significant surge in AI tool adoption within the software development community. According to the survey, 84% of developers now use or plan to use AI tools in their workflow, a notable increase from 76% in the previous year [1][2]. This widespread adoption spans all experience levels, with approximately 80% of developers incorporating AI tools regardless of seniority [4].
Despite the increasing usage, the survey highlights growing skepticism among developers regarding the accuracy and reliability of AI-generated outputs. Trust in AI accuracy has fallen dramatically, from 40% in previous years to just 29% in 2025 [1]. This "AI trust gap" is even more pronounced among experienced developers, with only 2.6% expressing high trust in AI tool outputs [2][3].
The survey identified several major pain points that developers face when using AI coding tools:
Almost Right Solutions: 66% of developers cited dealing with "AI solutions that are almost right, but not quite" as their biggest frustration [2][3].
Time-Consuming Debugging: 45% of respondents reported that debugging AI-generated code is more time-consuming than debugging human-written code [2][3].
Lack of Deep Understanding: Some developers expressed concerns about trading deep understanding for quick fixes, potentially leading to future technical debt [2].
The survey results suggest that while AI tools are widely used, they are not replacing traditional development practices entirely:
IDE Preferences: Developers still prefer established editors like Visual Studio Code (75%) and Visual Studio (29%) over AI-first programming environments [2][3].
Language Popularity: JavaScript, HTML/CSS, and Python remain the most widely used programming languages, with Rust maintaining its status as the most admired language [2][3].
Human Expertise: 75% of developers stated that human advice is irreplaceable in scenarios where they don't trust AI's output [2][3].
As AI adoption in software development continues to grow, several concerns and trends are emerging:
Cautious Approach to AI Agents: While AI tool usage is high, developers are more cautious about adopting AI agents. Over half of the survey respondents use simpler AI tools, and 38% have no immediate plans to adopt agents [2][3].
Vibe Coding Rejection: 76% of surveyed developers rejected "vibe coding," an AI-centric programming method that has gained recent attention [4].
The survey results have significant implications for the software development industry:
Need for AI Literacy: There's a growing need for expanded AI tool literacy and for community discussions that address issues specific to AI-integrated workflows [1].
Balancing AI and Human Expertise: Companies need to find the right balance between leveraging AI tools and maintaining human expertise to ensure code quality and a deep understanding of systems [2][4].
Evolving Development Practices: As AI tools become more prevalent, development practices and productivity metrics may need to evolve to account for the unique challenges and opportunities of AI-assisted coding [2][4].
In conclusion, while AI has become an integral part of many developers' workflows, the industry is still grappling with how to best utilize these tools while maintaining code quality, deep understanding, and trust in the development process.
Summarized by Navi