10 Sources
[1]
AI is every developer's new reality - 5 ways to make the most of it
Companies guide developers to make the most of AI. Hammer home the changes that automation brings. Create a flywheel of change to help people learn skills.

Industry experts recognize that AI is having a massive impact on software development. Research suggests that almost all developers now rely on AI tools, with many of the roles and responsibilities of these professionals at risk of being automated. At technology specialist Harness' recent Unscripted software development conference in London, five financial services business leaders explained how their firms are embracing AI. Here are their best-practice tips.

Dill Bath, AI technical lead at Allianz Global Investors, said his organization is using the Open Policy Agent (OPA) engine, which streamlines policy management across the stack to boost security and auditing capabilities. "We're codifying all the policies, not in a way to block our developers, but almost like a copilot to nudge them in the right direction," he said. "We report and say, 'Hey, you might be doing something wrong here.' That approach is working well in our pilots, but we really want to push forward." (A sketch of this advisory pattern appears after this section.)

Bath said the firm wants to take a tech-first stance when new regulations arrive. "It doesn't work saying, 'Hey, let's add the regulation to the policy, let's create a manual process, and then let's check once a year whether people are doing this or not,'" he said. "So, we're going the other way around. When new regulations come in, we interpret them from a technology-first perspective."

As part of this approach, Bath's team is undertaking a cultural shift by embracing platform engineering and agile transformation. The aim is to increase the speed of delivery in a compliant manner. "Ultimately, developers want autonomy, and that's what we're trying to bring to the table without compromising on the various standards we have."

Tony Phillips, engineering lead for DevOps services at Lloyds Banking Group, said his firm is running a program called Platform 3.0, which aims to modernize infrastructure and lay the groundwork for adopting AI. He said the next step is to move beyond using AI to assist with coding and to boost all areas of the development process. "We are creating productivity boosts in our developer community, but we are now looking at how we take that forward across the rest of the pipeline for what we ship."

Phillips recognized that introducing a culture of change in a big enterprise like Lloyds, which has 10,000 software engineers and developers, and multiple public and private infrastructures, is a significant task. He said focusing on communication is critical. "Hammer home the changes that are happening, because the responses range from disbelief to a belief that change isn't going to work right through to what we're now seeing, which is the success. So, just landing the message has been one of the key challenges for us."

He said the bank's initial explorations into AI suggest that learning from experiences is an important best practice. "There's always a balance, because you've got to let people get hold of the technology, put it in their context of what they're doing, and then understand what good looks like," he said. "Then you've got to build the capacity for what gets fed back so that you can respond quickly."
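Bath's "copilot to nudge" framing suggests policies evaluated in advisory mode rather than as hard CI blocks. Below is a minimal sketch of what that could look like in Python, assuming a local OPA server and a hypothetical `policies/deploy` package; the request shape follows OPA's standard Data API, but the policy path, input fields, and `warnings` output are illustrative, not Allianz's actual setup.

```python
# Advisory policy check against a local OPA server (hypothetical policy path).
# OPA's Data API takes POST /v1/data/<package/path> with {"input": ...}
# and returns {"result": ...}; the "warnings" shape below is illustrative.
import json
import urllib.request

OPA_URL = "http://localhost:8181/v1/data/policies/deploy"  # assumed local OPA


def check_deployment(manifest: dict) -> list:
    """Ask OPA for policy warnings; nudge the developer instead of blocking."""
    body = json.dumps({"input": manifest}).encode()
    req = urllib.request.Request(
        OPA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        result = json.load(resp).get("result", {})
    return result.get("warnings", [])


if __name__ == "__main__":
    for warning in check_deployment({"image": "app:latest", "replicas": 1}):
        # Report, don't fail: "Hey, you might be doing something wrong here."
        print(f"policy nudge: {warning}")
```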
Bettina Topali, senior software engineering manager at Hargreaves Lansdown, said regulated financial services firms must innovate while ensuring that risk and security are manageable. "We have to show progress, as standing still in a fast-paced landscape is also risky," she said. "Our clients want sleek experiences and modern services. They don't want to be on the phone with the helpdesk all day."

She said the key to delivering innovation to customers in a risk-free manner is embracing automation. "We've embedded guardrails, such as automated testing, security scanning, and code coverage, that help us move faster within certain controls. By providing these blueprints to our engineers, we create more room for innovation." (A rough sketch of such a guardrail gate appears at the end of this piece.)

Topali said executives must move beyond the buzzwords associated with emerging technology to unlock benefits from experimentation: "People are not going to believe your strategy by looking at a slide." She advised digital leaders to take people on a journey where they can see visible progress during innovative initiatives. "If we guide them through these steps, then, hopefully, their disbelief will be replaced by people believing in the strategy," she said. "New startups and fintechs are coming and are going to get a share of the market. With all these tools, we have an opportunity. So, let's keep up the pace."

Daniel Terry, deputy domain architect for developer experience at Nordic corporate bank SEB, said his organization is giving developers tools, such as GitHub and Copilot, to prepare them for a shift to agentic AI. "We're moving to a world where the developers are not the producers of the code and are more like the conductors of agents," he said. "When we hit that stage, we also need to look at how we deal with challenges in the pipeline. How do we secure the output of thousands of lines of code that are generated in minutes instead of months or years?"

Like others, Terry said governance is crucial. Give developers feedback when they take non-compliant actions -- and AI might help with this process. "We have a lot of different platforms and maybe haven't created a dotted line between all the platforms," he said. "AI might be the opportunity to do that and give developers the chance to do the right thing from the beginning."

Terry also referred to the rise of vibe coding and suggested it shouldn't be used by people who have just begun coding in an enterprise setting. "Vibe coding is for someone who is senior and can prompt AI the right way," he said. "You also need to go back to basics. Test your code to verify it's doing what you want it to do, because AI generates so much code in so little time."

Aaron Gallimore, senior director of cloud engineering at Global Payments, said AI can make it easier for developers to use the broad range of tools at their disposal. "Our big focus is making systems scalable, secure, and approved, so that our developers spend less time moving between the tooling," he said. While Gallimore said he's eager for large language models to do some of the heavy lifting associated with development work, other IT professions can also benefit. "Companies will give staff Copilot or the next big coding agent to developers and forget about the rest of the organization," he said. "We're trying to arm our information security and our audit teams to fight fire with fire."
Gallimore said the key to success is training IT professionals to use AI tools effectively. "We've started to put in place university sessions where people come along and do short demos on things they've done in the last week," he said. "You see that spark in people's eyes where they think, 'Oh, I can use this technology.' It's about building that flywheel of knowledge and cultural change."
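Topali's guardrails (automated testing, security scanning, code coverage) are typically wired into a delivery pipeline as pass/fail gates. Here is a rough sketch of such a gate in Python, assuming pytest with the pytest-cov plugin and the pip-audit dependency scanner are installed; the 80% coverage threshold and the tool choices are illustrative, not Hargreaves Lansdown's actual blueprint.

```python
# CI guardrail gate: tests with a coverage floor, then a dependency scan.
# Assumes pytest, pytest-cov, and pip-audit are installed; values illustrative.
import subprocess
import sys

COVERAGE_THRESHOLD = 80  # percent; assumed policy value


def run(cmd):
    print("$", " ".join(cmd))
    return subprocess.run(cmd).returncode


def main() -> int:
    # pytest-cov's --cov-fail-under turns coverage itself into a pass/fail gate.
    if run(["pytest", "--cov=.", f"--cov-fail-under={COVERAGE_THRESHOLD}"]) != 0:
        print("guardrail: tests failed or coverage below threshold")
        return 1
    # pip-audit exits non-zero when known-vulnerable dependencies are found.
    if run(["pip-audit"]) != 0:
        print("guardrail: vulnerable dependencies detected")
        return 1
    print("all guardrails passed")
    return 0


if __name__ == "__main__":
    sys.exit(main())
```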
[2]
DORA report reframes AI as central to software development
Most organizations use AI in dev; the question now is how to use it properly, claims report

Google Cloud's 2025 DORA (DevOps Research and Assessment) report is out, claiming that since 90 percent of respondents now make some use of AI for software development, the question is not whether to adopt it but how to realize its value. The research is based on nearly 5,000 survey responses from IT professionals, interpreted by the DORA team. The DORA research project is long-standing, with annual reports since 2014; it was acquired by Google in December 2018.

The highest use of AI is for writing new code (71 percent of developers), and the most common interaction with AI is via chatbots (among all users, not all of whom are coders), followed by IDEs (integrated development environments). Use of agent mode, where AI makes changes autonomously, is less common, with 61 percent saying they never do this and only 17 percent doing so once a day or more often.

Although 80 percent of those surveyed believe AI has increased productivity, there are drawbacks. Some 30 percent do not trust AI-generated code, AI increases delivery instability, and overall AI acts as an amplifier, increasing the strength of high-performing organizations but worsening the dysfunction of those that struggle, the report states.

Included in the research paper is the DORA AI Capabilities Model with seven technical and cultural best practices for AI adoption. These comprise clear communication of AI usage policies, high quality internal data, AI access to that data, strong version control, small batches of work, user-centric focus, and a high quality internal platform. This last is vaguely defined but refers to the software and systems on which developers build applications and services.

Gene Kim, co-founder of the DORA project and well-known for books on effective DevOps including The Phoenix Project, has contributed to the report. He describes how he, along with the verbose and influential dev blogger Steve Yegge (ex Amazon and Google), embraced vibe coding; the pair have written a book on the subject. They saw how it could go wrong, he wrote, "resulting in deleted tests, outages, and even deleted code repositories." Nevertheless it changed their lives, they claim, enabling faster and more ambitious projects. In consequence, Kim writes, "our control systems - that's us - must also speed up" with fast feedback loops, independent testing and deployment systems, and a climate for learning given the "idiosyncratic nature of AI and its rapid rate of advance."

The DORA research divides DevOps teams into seven team archetypes, numbered roughly in order of desirability, with the lowest named Foundational challenges and the highest called Harmonious high-achiever. The teams are defined on the basis of six metrics, these being product performance, software delivery throughput, software delivery instability, burnout, friction, and valuable work. According to the report, 10 percent of respondents are stuck at the lowest level, 20 percent are at the highest, and 45 percent are at the fifth level or higher. The full report runs to 440 pages of further detail.

The 2025 report represents a dramatic shift in focus towards AI for the DORA project. The 2024 report also featured AI, though its conclusions were mixed and measured. "AI does not appear to be a panacea," it reported at the time, and although there was evidence in favor of adopting AI, "there are plenty of potential roadblocks, growing pains, and ways in which AI might have deleterious effects."
One such effect was reduced software delivery stability and throughput. In 2024 and in previous years, the DORA research assessed software delivery performance based on four keys: change lead time between code commit and deployment, deployment frequency, change fail rate, and failed deployment recovery time. In 2025, these are referenced only in a footnote, as one of many ways to measure software development. What has caused the DORA team to shift its stance towards AI? Kim states in the 2025 paper that "I started calling last year's report and its findings 'the DORA 2024 anomaly'." In his view, what needs to change is the way DevOps is practiced in the AI era. Unless the rate of AI adoption slows (and we note that Google is invested in this not happening), promoting best practice around how it is used may be the pragmatic option, even among sceptics. ®
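For readers who want the four keys in concrete terms, here is a small Python sketch computing them from deployment records; the record format and sample data are invented for illustration, and a real pipeline would pull these from CI/CD and incident-tracking systems.

```python
# Computing the four classic DORA keys from deployment records.
# The record format and sample data are invented for illustration.
from datetime import datetime, timedelta
from statistics import median

deployments = [
    {"committed": datetime(2025, 9, 1, 9), "deployed": datetime(2025, 9, 1, 15),
     "failed": False, "recovery": None},
    {"committed": datetime(2025, 9, 2, 10), "deployed": datetime(2025, 9, 3, 11),
     "failed": True, "recovery": timedelta(hours=2)},
    {"committed": datetime(2025, 9, 4, 8), "deployed": datetime(2025, 9, 4, 9),
     "failed": False, "recovery": None},
]

# 1. Change lead time: code commit -> deployment, reported as a median.
print("median lead time:", median(d["deployed"] - d["committed"] for d in deployments))

# 2. Deployment frequency: deployments per active day.
days = {d["deployed"].date() for d in deployments}
print("deployments per active day:", len(deployments) / len(days))

# 3. Change fail rate: share of deployments that caused a failure.
print("change fail rate:", sum(d["failed"] for d in deployments) / len(deployments))

# 4. Failed deployment recovery time: time to restore service, as a median.
recoveries = [d["recovery"] for d in deployments if d["failed"]]
print("median recovery time:", median(recoveries) if recoveries else "n/a")
```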
[3]
AI magnifies your team's strengths - and weaknesses, Google report finds
Nearly all developers now rely on AI tools. AI amplifies strengths and magnifies dysfunction. High-quality platforms are a must for AI success.

Google released its 2025 DORA software development report. DORA (DevOps Research & Assessment) is a research program at Google (part of the Google Cloud organization). DORA explores the capabilities and factors that drive software delivery and operations performance. This year, the DORA project surveyed 5,000 software development professionals across industries and followed up with more than 100 hours of interviews. It may be one of the most comprehensive studies of AI's changing role in software development, especially at the enterprise level.

This year's results are particularly relevant because AI has infiltrated software development to a rather extreme degree. The report shows some encouraging notes but also showcases some areas of real challenge. In writing this article, I've gone through the 142-page report and pulled five major observations that cut through the hype to reveal what's really changing in software development.

According to survey respondents, somewhere between 90 and 95% of developers now rely on AI for their software development work. The report mentions 95% in the intro and 90% later in a detail section, but regardless of which number you choose, nearly all coders are now using AI. According to the report, this is a 14% jump from last year. The median time spent interacting with an AI was two hours per day.

There's a bit more nuance to this, though. For example, only 7% of respondents "always" report using AI when faced with a problem to solve. The largest group, 39%, report "sometimes" turning to AI for help. But what struck me is that a full 60% use AI "about half the time" or more when trying to solve a problem or complete a task. Eighty percent of programmers reported an overall increase in productivity, but only 59% reported that their code quality improved. Another key metric is this: 70% of respondents trust the AI's quality, while 30% don't.

Let me share a personal thought on this. I just finished a massive coding sprint made possible by the AI. The code that came out was almost never right on the first run. I had to spend a lot of time cajoling the AI to get it right. Even once the work was done, I went back to do a full QA sweep, where I found more errors. My conclusion is that there is no way I could have gotten anywhere near the amount of work done I just did without AI. But there's no way in heck I'm going to trust any code the AI writes without doing a lot of review, validation, and testing. Of course, that's not much different from how I felt when I was a manager and delegated coding to employees or contractors.

This was one of the more fascinating results coming out of the study. The DORA team contends that AI has become an amplifier. Essentially, AI "magnifies the strengths of high-performing organizations and the dysfunctions of struggling ones." That makes so much sense. If you read my most recent article on "10 ChatGPT Codex secrets I only learned after 60 hours of pair programming with it," I pointed out that AIs make big mistakes quickly. One malformed prompt can send an AI off to wreak some major destruction. I had the experience where Codex decided to delete a large chunk of one of my files, and then immediately checked in those changes to GitHub.
Fortunately, I was able to roll those changes back, but I saw a massive amount of work vanish faster than I could take a sip of coffee. Essentially, the more effective and organized a team is, the more AI will help. The more scattered or haphazard a team is, the more AI will hurt. In my case, I have really good revision control practice, so when the AI ate my homework, I was able to get it back because of controls I had put in place before I ever gave the AI its first access to my codebase.

So who wins and who loses? The DORA team identified eight factors that determined a team's overall performance. Then they measured these factors against respondents and their teams. This helped identify seven team archetypes. AI, says the report, is a mirror of organizations. Using AI makes the strengths and weaknesses of teams more apparent. But what I found particularly interesting is the idea that the "speed vs. stability" trade-off is a myth. This is the idea that you can be fast or you can produce good code, but not both. As it turns out, the top 30% of respondents fall into the harmonious high-achievers or pragmatic performers archetypes, and those folks are producing output quickly, and the quality of that output is high. The report stresses, "Successful AI adoption is a systems problem, not a tools problem."

The DORA folks seem to like the number seven. They say seven key practices drive AI's impact (for good or bad) -- per the DORA AI Capabilities Model, these are clear communication of AI usage policies, high-quality internal data, AI access to that data, strong version control, working in small batches, a user-centric focus, and a quality internal platform. As you might imagine, the successful teams employ more of these practices. While the unsuccessful teams might have highly productive individual programmers, it's the lack of these fundamentals that seems to bring them down. They recommend, "Treat your AI adoption as an organizational transformation. The greatest returns will come from investing in the foundational systems that amplify AI's benefits: your internal platform, your data ecosystem, and the core engineering disciplines of your teams. These elements are the essential prerequisites for turning AI's potential into measurable organizational performance."

Last year, it became fairly big news when the previous DORA report showed that AI actually reduced software development productivity, rather than increased it. This year, the opposite is true. The DORA explorers were able to identify two key factors that turned those results around. Development organizations are more familiar with AI and know how to work with it more effectively than they did a year ago.

The study shows that 90% of developer organizations have adopted platform engineering. This is the practice of building strong internal development platforms that aggregate the tools, automations, and shared services for a development team. According to DORA, when the internal platform works well, developers spend less time fighting the system and more time creating value. If you view AI as an amplifier, then you can see how good systems can really improve results. Interestingly, if platforms are weak, AI doesn't seem to improve organizational productivity. Good internal platforms are a very clear prerequisite to effective AI use.

The next factor seems like a buzzword out of a workplace sitcom but is really quite important. It's VSM (or value stream management). The idea is that managers create a map of how work moves from idea to delivery. It's basically a flowchart for operations rather than just bits.
By seeing every step, teams can identify problem areas, like very long code reviews or releases that stall at various stages. The report states that the positive impact of AI adoption is "dramatically amplified" in organizations with a strong VSM practice. For the record, the word "dramatically" appears in the report four times. It adds, "VSM acts as a force multiplier for AI investments. By providing a systems-level view, it ensures AI is applied to the right problems, turning localized productivity gains into significant organizational advantages instead of simply creating more downstream chaos."

There are a few clear conclusions from the report. First, AI has moved from hype to mainstream in the enterprise software development world. Second, real advantage isn't about the tools (or even the AI you use). It's about building solid organizational systems. Without those systems, AI has little advantage. And third, AI is a mirror. It reflects and magnifies how well (or poorly) you already operate.

What do you think? Has your organization been using AI tools in software development? Do you see AI as a genuine productivity boost or as something that adds more instability? Which of the seven team archetypes feels closest to your own experience? And do you think practices like platform engineering or VSM really make the difference? Share your thoughts in the comments below.
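To make the value-stream idea concrete, here is a toy Python sketch that models each delivery stage with a typical duration and flags the bottleneck; the stage names and hours are invented for illustration, not DORA data.

```python
# Toy value-stream map: stage durations from idea to delivery (hours invented).
stages = {
    "backlog -> in progress": 40,
    "coding": 16,
    "code review": 72,   # long reviews are a classic bottleneck
    "CI pipeline": 1,
    "staging -> release": 24,
}

total = sum(stages.values())
for name, hours in stages.items():
    print(f"{name:24s} {hours:4d} h  ({100 * hours / total:4.1f}% of lead time)")

bottleneck = max(stages, key=stages.get)
print(f"\nbottleneck: {bottleneck} - speeding up coding alone",
      "(for example, with AI) barely moves end-to-end delivery time.")
```

The point the sketch illustrates is the report's: AI applied to a non-bottleneck stage produces localized gains and more downstream chaos, not faster delivery.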
[4]
5 ways you can maximize AI's big impact in software development
ZDNET's key takeaways

* Companies guide developers to make the most of AI.
* Hammer home the changes that automation brings.
* Create a flywheel of change to help people learn skills.

Industry experts recognize that AI is having a massive impact on software development. Research suggests that almost all developers now rely on AI tools, with many of the roles and responsibilities of these professionals at risk of being automated. At technology specialist Harness' recent Unscripted software development conference in London, five financial services business leaders explained how their firms are embracing AI. Here are their best-practice tips.

1. Encourage flexibility within guidelines

Dill Bath, AI technical lead at Allianz Global Investors, said his organization is using the Open Policy Agent (OPA) engine, which streamlines policy management across the stack to boost security and auditing capabilities. "We're codifying all the policies, not in a way to block our developers, but almost like a copilot to nudge them in the right direction," he said. "We report and say, 'Hey, you might be doing something wrong here.' That approach is working well in our pilots, but we really want to push forward."

Bath said the firm wants to take a tech-first stance when new regulations arrive. "It doesn't work saying, 'Hey, let's add the regulation to the policy, let's create a manual process, and then let's check once a year whether people are doing this or not,'" he said. "So, we're going the other way around. When new regulations come in, we interpret them from a technology-first perspective."

As part of this approach, Bath's team is undertaking a cultural shift by embracing platform engineering and agile transformation. The aim is to increase the speed of delivery in a compliant manner. "Ultimately, developers want autonomy, and that's what we're trying to bring to the table without compromising on the various standards we have."

2. Focus on communication

Tony Phillips, engineering lead for DevOps services at Lloyds Banking Group, said his firm is running a program called Platform 3.0, which aims to modernize infrastructure and lay the groundwork for adopting AI. He said the next step is to move beyond using AI to assist with coding and to boost all areas of the development process. "We are creating productivity boosts in our developer community, but we are now looking at how we take that forward across the rest of the pipeline for what we ship."

Phillips recognized that introducing a culture of change in a big enterprise like Lloyds, which has 10,000 software engineers and developers, and multiple public and private infrastructures, is a significant task. He said focusing on communication is critical. "Hammer home the changes that are happening, because the responses range from disbelief to a belief that change isn't going to work right through to what we're now seeing, which is the success. So, just landing the message has been one of the key challenges for us."

He said the bank's initial explorations into AI suggest that learning from experiences is an important best practice. "There's always a balance, because you've got to let people get hold of the technology, put it in their context of what they're doing, and then understand what good looks like," he said.
"Then you've got to build the capacity for what gets fed back so that you can respond quickly." 3. Take people on a journey Bettina Topali, senior software engineering manager at Hargreaves Lansdown, said regulated financial services firms must innovate while ensuring that risk and security are manageable. "We have to show progress, as standing still in a fast-paced landscape is also risky," she said. "Our clients want sleek experiences and modern services. They don't want to be on the phone with the helpdesk all day." She said the key to delivering innovation to customers in a risk-free manner is by embracing automation. "We've embedded guardrails, such as automated testing, security scanning, and code coverage, that help us move faster within certain controls. By providing these blueprints to our engineers, we create more room for innovation." Also: Will AI think like humans? We're not even close - and we're asking the wrong question Topali said executives must move beyond the buzzwords associated with emerging technology to unlock benefits from experimentation: "People are not going to believe your strategy by looking at a slide." She advised digital leaders to take people on a journey where they can see visible progress during innovative initiatives. "If we guide them through these steps, then, hopefully, their disbelief will be replaced by people believing in the strategy," she said. "New startups and fintechs are coming and are going to get a share of the market. With all these tools, we have an opportunity. So, let's keep up the pace." 4. Give regular feedback Daniel Terry, deputy domain architect for developer experience at Nordic corporate bank SEB, said his organization is giving developers tools, such as GitHub and Copilot, to prepare them for a shift to agentic AI. "We're moving to a world where the developers are not the producers of the code and are more like the conductors of agents," he said. "When we hit that stage, we also need to look at how we deal with challenges in the pipeline. How do we secure the output of thousands of lines of code that are generated in minutes instead of months or years?" Also: I did 24 days of coding in 12 hours with a $20 AI tool - but there's one big pitfall Like others, Terry said governance is crucial. Give developers feedback when they take non-compliant actions -- and AI might help with this process. "We have a lot of different platforms and maybe haven't created a dotted line between all the platforms," he said. "AI might be the opportunity to do that and give developers the chance to do the right thing from the beginning." Terry also referred to the rise of vibe coding and suggested it shouldn't be used by people who have just begun coding in an enterprise setting. "Vibe coding is for someone who is senior and can prompt AI the right way," he said. "You also need to go back to basics. Test your code to verify it's doing what you want it to do, because AI generates so much code in so little time." 5. Fight fire with fire Aaron Gallimore, senior director of cloud engineering at Global Payments, said AI can make it easier for developers to use the broad range of tools at their disposal. "Our big focus is making systems scalable, secure, and approved, so that our developers spend less time moving between the tooling," he said. While Gallimore said he's eager for large language models to do some of the heavy lifting associated with development work, other IT professions can also benefit. 
"Companies will give staff Copilot or the next big coding agent to developers and forget about the rest of the organization," he said. "We're trying to arm our information security and our audit teams to fight fire with fire." Also: I built a business plan with ChatGPT and it turned into a cautionary tale Gallimore said the key to success is training IT professionals to use AI tools effectively. "We've started to put in place university sessions where people come along and do short demos on things they've done in the last week," he said. "You see that spark in people's eyes where they think, 'Oh, I can use this technology.' It's about building that flywheel of knowledge and cultural change."
[5]
AI accountability: building secure software in the age of automation
AI boosts coding efficiency but heightens software security challenges

Artificial Intelligence is reshaping software development due to its ability to increase productivity and efficiency. Developers, who are constantly under pressure to write substantial amounts of code and ship faster in the race to innovate, are increasingly integrating and using AI tools to assist them in writing code and reducing heavy workloads.

However, the increased adoption of AI is rapidly escalating cybersecurity complexity. According to global studies, a third of organizations report that network traffic has more than doubled in the last two years and breach rates are up 17% year on year. The same study reveals that 58% of organizations are seeing more AI-powered attacks, and half say their large language models have been targeted. Given this challenging AI threat landscape, developers need to be accountable and responsible for the software that they are leveraging AI-generated code to build. Secure by design starts with developers really understanding their craft: challenging the code they are implementing, and questioning what insecure code looks like and how it can be avoided.

AI is increasingly transforming the day-to-day work of developers, with 42% reporting that at least half of their codebase is AI generated. From code completion and automated generation to vulnerability detection, prevention, and secure refactoring, the benefits of AI in software development are undeniable. However, recent studies show that 80% of development teams are concerned about security threats stemming from developers using AI in code generation. Without sufficient knowledge and expertise to critically assess AI outputs, developers risk overlooking issues such as outdated or insecure third-party libraries, potentially exposing applications and their users to unnecessary risks.

The lure of efficiency has also led to growing reliance on sophisticated AI tools. Yet this convenience can come at a cost: an overdependence on AI-generated code without a strong grasp of its underlying logic or architecture. In such cases, errors can propagate unchecked, and critical thinking may take a back seat. To responsibly navigate this evolving landscape, developers must remain vigilant against risks including algorithmic bias, misinformation, and misuse. The key to secure, trustworthy AI development lies in a balanced approach, one grounded in technical knowledge and backed by robust organizational policies. Embracing AI with discernment and accountability is not just good practice, it is essential for building resilient software in the age of intelligent automation.

Too often, security gets pushed to the final stages of development, leaving critical blind spots just as applications are about to launch. But with 67% of organizations already adopting or planning to adopt AI, the stakes are higher than ever. Addressing the risks tied to AI technologies isn't optional, it's crucial. What's needed is a mindset shift: security must be baked into every phase of development. This requires comprehensive education and continuous, context-driven learning focused on secure-by-design principles, common vulnerabilities, and best practices for secure coding. As AI continues to transform the software development ecosystem at an unprecedented pace, staying ahead of the curve is essential.
Below are five top takeaways for developers to consider when navigating an AI-enabled future:

* Stick to the fundamentals - AI is a tool, not a substitute for foundational security practices. Core principles such as input validation, least privilege access, and threat modelling remain critical (see the sketch at the end of this piece).
* Understand the tools - AI-assisted coding tools can accelerate development, but without a strong security foundation, they can introduce hidden vulnerabilities. Know how tools work and understand what their potential risks are.
* Always validate output - AI can deliver answers with confidence, but not always with accuracy. Especially in high-stakes applications, it's vital to rigorously validate AI-generated code and recommendations.
* Stay adaptable - The AI threat landscape is constantly evolving. New model behaviors and attack vectors will continue to emerge. Continuous learning and adaptability are key.
* Take control of data - Data privacy and security should drive decisions about how and where AI models are deployed. Hosting models locally can offer greater control, especially as providers' terms and data practices change.

To ensure the safe and responsible use of AI, organizations should establish clear and robust policies. A well-defined AI policy that the whole company is aware of can help mitigate potential risks and promote consistent practices across the organization. Along with rolling out clear policies around the use of AI, companies must also consider their developers' desire to use new AI tools to help them write code. In this case, companies must ensure that their security teams have tested the prospective AI tool, that they have the necessary policy around leveraging the AI tool and, finally, that their developers are trained in writing code securely and continuously upskill themselves.

Policies and robust security measures mustn't disrupt company workflow or add unnecessary complexity, particularly for developers. The more seamless the security policies are, the less likely those within a company will try to bypass them to leverage AI innovation - thereby reducing the likelihood of insider threats and unintended misuse of AI tools.

We will most likely see a significant number of GenAI projects being abandoned after proof of concept by the end of 2025, according to Gartner, due in part to inadequate security controls. However, by taking the necessary steps to foster and maintain fundamental security principles through continuous security training and education, and by adhering to robust policies, it is possible for developers to navigate the dangers of AI and play a pivotal role in designing and maintaining systems that are secure, ethical, and resilient.
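To make the "stick to the fundamentals" takeaway concrete, here is a minimal Python sketch of allow-list input validation with a least-privilege default, the kind of check that should gate any handler regardless of whether a human or an AI wrote it; the username policy and role set are illustrative assumptions, not prescriptions.

```python
# Fundamental input validation: allow-list checks on untrusted input, applied
# the same way whether a human or an AI wrote the handler. Policy values are
# illustrative.
import re

USERNAME_RE = re.compile(r"[a-z][a-z0-9_]{2,31}")  # assumed naming policy
ALLOWED_ROLES = {"viewer", "editor"}  # least privilege: no admin by default


def parse_signup(form: dict) -> dict:
    """Validate untrusted form input; fail loudly rather than guess."""
    username = str(form.get("username", ""))
    if not USERNAME_RE.fullmatch(username):
        raise ValueError("invalid username")
    role = str(form.get("role", "viewer"))
    if role not in ALLOWED_ROLES:
        raise ValueError("role not permitted")
    return {"username": username, "role": role}


if __name__ == "__main__":
    print(parse_signup({"username": "dev_one", "role": "editor"}))
    try:
        parse_signup({"username": "root!", "role": "admin"})
    except ValueError as exc:
        print("rejected:", exc)
```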
[6]
The surprising ways AI helps strong dev teams and hurts weak ones, according to Google
Nearly all developers now rely on AI tools. AI amplifies strengths and magnifies dysfunction. High-quality platforms are a must for AI success.

Google released its 2025 DORA software development report. DORA (DevOps Research & Assessment) is a research program at Google (part of the Google Cloud organization). DORA explores the capabilities and factors that drive software delivery and operations performance. This year, the DORA project surveyed 5,000 software development professionals across industries and followed up with more than 100 hours of interviews. It may be one of the most comprehensive studies of AI's changing role in software development, especially at the enterprise level.

This year's results are particularly relevant because AI has infiltrated software development to a rather extreme degree. The report shows some encouraging notes but also showcases some areas of real challenge. In writing this article, I've gone through the 142-page report and pulled five major observations that cut through the hype to reveal what's really changing in software development.

According to survey respondents, somewhere between 90 and 95% of developers now rely on AI for their software development work. The report mentions 95% in the intro and 90% later in a detail section, but regardless of which number you choose, nearly all coders are now using AI. According to the report, this is a 14% jump from last year. The median time spent interacting with an AI was two hours per day.

There's a bit more nuance to this, though. For example, only 7% of respondents "always" report using AI when faced with a problem to solve. The largest group, 39%, report "sometimes" turning to AI for help. But what struck me is that a full 60% use AI "about half the time" or more when trying to solve a problem or complete a task. Eighty percent of programmers reported an overall increase in productivity, but only 59% reported that their code quality improved. Another key metric is this: 70% of respondents trust the AI's quality, while 30% don't.

Let me share a personal thought on this. I just finished a massive coding sprint made possible by the AI. The code that came out was almost never right on the first run. I had to spend a lot of time cajoling the AI to get it right. Even once the work was done, I went back to do a full QA sweep, where I found more errors. My conclusion is that there is no way I could have gotten anywhere near the amount of work done I just did without AI. But there's no way in heck I'm going to trust any code the AI writes without doing a lot of review, validation, and testing. Of course, that's not much different from how I felt when I was a manager and delegated coding to employees or contractors.

This was one of the more fascinating results coming out of the study. The DORA team contends that AI has become an amplifier. Essentially, AI "magnifies the strengths of high-performing organizations and the dysfunctions of struggling ones." That makes so much sense. If you read my most recent article on "10 ChatGPT Codex secrets I only learned after 60 hours of pair programming with it," I pointed out that AIs make big mistakes quickly. One malformed prompt can send an AI off to wreak some major destruction. I had the experience where Codex decided to delete a large chunk of one of my files, and then immediately checked in those changes to GitHub.
Fortunately, I was able to roll those changes back, but I saw a massive amount of work vanish faster than I could take a sip of coffee. Essentially, the more effective and organized a team is, the more AI will help. The more scattered or haphazard a team is, the more AI will hurt. In my case, I have really good revision control practice, so when the AI ate my homework, I was able to get it back because of controls I had put in place before I ever gave the AI its first access to my codebase.

So who wins and who loses? The DORA team identified eight factors that determined a team's overall performance. Then they measured these factors against respondents and their teams. This helped identify seven team archetypes. AI, says the report, is a mirror of organizations. Using AI makes the strengths and weaknesses of teams more apparent. But what I found particularly interesting is the idea that the "speed vs. stability" trade-off is a myth. This is the idea that you can be fast or you can produce good code, but not both. As it turns out, the top 30% of respondents fall into the harmonious high-achievers or pragmatic performers archetypes, and those folks are producing output quickly, and the quality of that output is high. The report stresses, "Successful AI adoption is a systems problem, not a tools problem."

The DORA folks seem to like the number seven. They say seven key practices drive AI's impact (for good or bad) -- per the DORA AI Capabilities Model, these are clear communication of AI usage policies, high-quality internal data, AI access to that data, strong version control, working in small batches, a user-centric focus, and a quality internal platform. As you might imagine, the successful teams employ more of these practices. While the unsuccessful teams might have highly productive individual programmers, it's the lack of these fundamentals that seems to bring them down. They recommend, "Treat your AI adoption as an organizational transformation. The greatest returns will come from investing in the foundational systems that amplify AI's benefits: your internal platform, your data ecosystem, and the core engineering disciplines of your teams. These elements are the essential prerequisites for turning AI's potential into measurable organizational performance."

Last year, it became fairly big news when the previous DORA report showed that AI actually reduced software development productivity, rather than increased it. This year, the opposite is true. The DORA explorers were able to identify two key factors that turned those results around. Development organizations are more familiar with AI and know how to work with it more effectively than they did a year ago.

The study shows that 90% of developer organizations have adopted platform engineering. This is the practice of building strong internal development platforms that aggregate the tools, automations, and shared services for a development team. According to DORA, when the internal platform works well, developers spend less time fighting the system and more time creating value. If you view AI as an amplifier, then you can see how good systems can really improve results. Interestingly, if platforms are weak, AI doesn't seem to improve organizational productivity. Good internal platforms are a very clear prerequisite to effective AI use.

The next factor seems like a buzzword out of a workplace sitcom but is really quite important. It's VSM (or value stream management). The idea is that managers create a map of how work moves from idea to delivery. It's basically a flowchart for operations rather than just bits.
By seeing every step, teams can identify problem areas, like very long code reviews or releases that stall at various stages. The report states that the positive impact of AI adoption is "dramatically amplified" in organizations with a strong VSM practice. For the record, the word "dramatically" appears in the report four times. It adds, "VSM acts as a force multiplier for AI investments. By providing a systems-level view, it ensures AI is applied to the right problems, turning localized productivity gains into significant organizational advantages instead of simply creating more downstream chaos."

There are a few clear conclusions from the report. First, AI has moved from hype to mainstream in the enterprise software development world. Second, real advantage isn't about the tools (or even the AI you use). It's about building solid organizational systems. Without those systems, AI has little advantage. And third, AI is a mirror. It reflects and magnifies how well (or poorly) you already operate.

What do you think? Has your organization been using AI tools in software development? Do you see AI as a genuine productivity boost or as something that adds more instability? Which of the seven team archetypes feels closest to your own experience? And do you think practices like platform engineering or VSM really make the difference? Share your thoughts in the comments below.
[7]
Nearly All Coders Now Use AI -- But Nobody Trusts It, Google Finds - Decrypt
Developers embraced AI like candy, but treated its output like a scam email.

Software developers have embraced artificial intelligence tools with the enthusiasm of kids discovering candy, yet they trust the output about as much as a politician's promises. Google Cloud's 2025 DORA Report, released Wednesday, shows that 90% of developers now use AI in their daily work, a 14% increase from last year. The report also found that only 24% of respondents actually trust the information these tools produce.

The annual research, which surveyed nearly 5,000 technology professionals worldwide, paints a picture of an industry that is trying to move fast without breaking things. Developers spend a median of two hours daily working with AI assistants, integrating them into everything from code generation to security reviews. Yet 30% of these same professionals trust AI output either "a little" or "not at all."

"If you are an engineer at Google, it is unavoidable that you will be using AI as part of your daily work," Ryan Salva, who oversees Google's coding tools, including Gemini Code Assist, told CNN. The company's own metrics show that more than a quarter of Google's new code now springs from AI systems, with CEO Sundar Pichai claiming a 10% productivity boost across engineering teams.

Developers mostly use AI to write and modify new code. Other use cases include debugging, reviewing and maintaining legacy code, alongside more educational purposes like explaining concepts or writing documentation. Despite the lack of trust, over 80% of surveyed developers reported that AI enhanced their work efficiency, while 59% noted improvements in code quality. However, here's where things get peculiar: 65% of respondents described themselves as heavily reliant on these tools, despite not fully trusting them. Among that group, 37% reported "moderate" reliance, 20% said "a lot," and 8% admitted to "a great deal" of dependence.

This trust-productivity paradox aligns with findings from Stack Overflow's 2025 survey, where distrust in AI accuracy increased from 31% to 46% in just one year, despite high adoption rates of 84% for the year. Developers treat AI like a brilliant but unreliable coworker -- useful for brainstorming and grunt work, but everything needs double-checking.

Google's response involves more than just documenting the trend. On Tuesday, the company unveiled its DORA AI Capabilities Model, a framework that identifies seven practices designed to help organizations harness the value of AI without incurring risks. The model advocates for user-centric design, clear communication protocols, and what Google refers to as "small-batch workflows" -- essentially, avoiding uncontrolled AI operation without supervision.

The report also introduces team archetypes ranging from "Harmonious high-achievers" to groups stuck in a "Legacy bottleneck." These profiles emerged from an analysis of how different organizations handle AI integration. Teams with strong existing processes saw AI amplify their strengths. Fragmented organizations watched AI expose every weakness in their workflow.

The full State of AI-assisted Software Development report and the companion DORA AI Capabilities Model documentation are available through Google Cloud's research portal. The materials include prescriptive guidance for teams looking to be more proactive in their adoption of AI technologies -- assuming anyone trusts them enough to implement them.
[8]
How are developers using AI? Inside our 2025 DORA report
As AI adoption has increased, developers have reported increased productivity and positive impacts on code quality.

Despite the widespread adoption and perceived benefits, some software development professionals remain cautious about using AI in their work. Our report uncovers a surprising "trust paradox": While 24% of respondents report a "great deal" (4%) or "a lot" (20%) of trust in AI, 30% trust it "a little" (23%) or "not at all" (7%). This indicates that AI outputs are perceived as useful and valuable by many of this year's survey respondents, despite a lack of complete trust in them. This could also imply that AI is being incorporated into workflows as a supportive tool to enhance productivity and efficiency, rather than serving as a full substitute for human judgment.

While AI is boosting individual performance, its effect on organizations is more complex. This year's research shows that AI adoption is now linked to higher software delivery throughput, meaning teams are releasing more software and applications, which is a positive reversal of last year's findings. However, the ongoing challenge remains of ensuring software works as intended before it's delivered to users.

Our research this year also found that AI can act as a "mirror and a multiplier." In cohesive organizations, AI boosts efficiency. In fragmented ones, it highlights weaknesses. To better understand these underlying conditions, this year's report moves beyond simple performance metrics to reveal seven distinct team archetypes, providing a deeper, more human-centric view of what drives success in AI adoption. These profiles, from "Harmonious high-achievers" to teams caught in a "Legacy bottleneck," offer a richer narrative that can help organizations understand the unique interplay between performance, well-being and workplace environment.

For organizations ready to adopt AI, new tools can help them evolve their work processes - meaning they benefit from both the productivity boost and the resulting transformation. Adoption of AI alone isn't enough to guarantee success, though. That's why this year, we're also introducing a new blueprint of seven essential capabilities for amplifying AI's impact. The DORA AI Capabilities Model is based on extensive research and identifies a blend of technical and cultural factors that are crucial for success.
[9]
No AI overload just yet? Google's new survey reveals how developers are really using AI at work
Many still don't fully trust AI's output, suggesting more needs to be done

It's no secret that developers are using AI to help with their repetitive coding activities, but new Google research has revealed the true extent of AI use, with two in three (65%) software devs saying they now heavily rely on AI tools. In terms of general use, nearly all development professionals (90%) reportedly now use AI, a sharp 14% rise on the figure observed in 2024. Today, the median time spent using AI in workflows among these types of workers stands at two hours.

The benefits for developers using the technology are clear - four in five agree they see higher productivity from AI use, and three in five (59%) claim to see improvements in their code quality. When using generative AI, developers can free up more time for problem-solving, design and oversight, with AI handling the lower-level coding demands. However, there remains some reluctance to hand work over to computer intelligence, with fewer than a quarter (24%) trusting AI outputs 'a lot' or 'a great deal'. As such, developers tend to see AI as a supportive tool, and not a replacement for human judgement.

Despite our worst fears, the report also reveals that AI adoption has not significantly changed how developers experience their work lives yet, with programmers feeling that their expertise is still valued despite the rise of time-saving AI. For companies, AI has both positive and negative implications, with Google Senior Director for Product Management Ryan J Salva calling the tech a "mirror and a multiplier." "In cohesive organizations, AI boosts efficiency. In fragmented ones, it highlights weaknesses," Salva noted.

The decade-old DevOps Research and Assessment (DORA) program split workers into seven team archetypes to dive a little deeper into how AI affects developers at work, concluding that organizations must "evolve their culture, processes and systems to support a new era of software development."
[10]
Google Study Shows A.I. Writes Code, But Developers Still Don't Fully Trust It
Developers say A.I. makes them faster, but confidence in its code is still scarce.

For decades, software was built line by line by human hands. That process is changing fast because of A.I. According to Google's latest annual DORA: State of A.I.-assisted Software Development report, released today (Sept. 23), 90 percent of technology professionals now use A.I. in their workflows, representing a 14 percent jump from last year. The survey of more than 5,000 software professionals and IT specialists found that developers rely on A.I. for tasks ranging from writing code snippets to running tests and reviewing security.

Despite higher A.I. adoption, however, trust in the technology remains low. While most say A.I. makes them faster and more productive, only 24 percent say they trust it "a lot" or "a great deal." Nearly a third admit they trust it "a little" or not at all.

"Boardroom-level prioritization shows that this change is likely here to stay. Every organization is facing pressure to improve software performance even in the face of broad economic pressures and constraints," Nathen Harvey, the study's lead researcher and a developer advocate at Google Cloud, told Observer. "On an individual level, A.I. has captured the human imagination and inspired developers to find ways to drive both top and bottom-line improvements for businesses."

The study found that 85 percent of professionals say A.I. has made them more productive, though 41 percent call the improvement only "slight." Fewer than 10 percent reported any decline in productivity. Developers now spend a median of two hours a day using these A.I. tools, and top-performing organizations report that A.I. is boosting throughput, allowing them to deliver applications faster and more reliably.

Code quality is where A.I.'s impact is most evident. Much of the software it helps create ends up running in production systems far longer than developers ever anticipated. That longevity shows A.I.-generated code can be more useful than expected, but it also raises the stakes. Readability and adaptability matter far more than quick fixes when judging code quality. Software relies on constant code changes, such as tweaks, fixes and new features, to stay alive. Feedback loops from automated tools or users act like vital signs, signaling the system's health. But Harvey cautioned that while A.I. speeds development, it can also make software delivery more unstable. "Even with the help of A.I., teams will still need ways to get fast feedback on the code changes that are being made," he said.

For now, developers are hesitant to give up control. Only a quarter in the survey say they have high trust in A.I.'s coding abilities. Google researchers call this the "trust paradox": A.I. is a useful assistant, but not yet a true partner. That skepticism could slow progress toward advanced uses like autonomous testing or fully automated release management. Harvey noted that developers treat A.I. output with the same healthy skepticism they apply to go-to resources, like coding solutions found on Stack Overflow -- useful but never blindly trusted. "A.I. is only as good as the data it has access to," he said.
"If your company's internal data is messy, siloed, or hard to reach, your A.I. tools will give generic, unhelpful answers, holding you back instead of helping." Harvey also noted that A.I. hasn't eased burnout or reduced friction. While it boosts individual productivity, those challenges often stem from company culture, leadership and processes -- problems technology alone can't fix. "If leaders start expecting more because A.I. makes developers faster, it could even add to the pressure," he added. To address this gap, Google introduced the DORA A.I. Capabilities Model, a framework of seven technical and cultural practices aimed at amplifying A.I.'s impact. The model emphasizes user focus, clear communication and small-batch workflows -- underscoring that success requires more than just new tools. "Culture and mindset continue to be huge influences on helping teams achieve and sustain top performance. A climate for learning, fast flow, fast feedback, and a practice of continuous improvement are what drive sustainable success. A.I. amplifies the necessity for all of these and provides the catalyst to transform along the way," said Harvey. Ultimately, Google's 2025 report argues the biggest barrier isn't adoption but trust. Without stronger confidence in A.I.'s reliability, the future of software development will depend as much on winning developer faith as on improving the technology itself.
AI is transforming the software development landscape, with nearly all developers now relying on AI tools. While it enhances productivity, it also amplifies both strengths and weaknesses in development teams, raising new security challenges.
The software development landscape is undergoing a significant transformation, with artificial intelligence (AI) taking center stage. According to Google Cloud's 2025 DORA (DevOps Research and Assessment) report, a staggering 90-95% of developers now rely on AI tools for their work, marking a 14% increase from the previous year [2][3]. This widespread adoption has shifted the focus from whether to adopt AI to how to maximize its value in the development process.

The integration of AI into software development has yielded mixed results. While 80% of programmers reported an overall increase in productivity, only 59% noted improvements in code quality [3]. The median time spent interacting with AI is two hours per day, with 60% of developers turning to AI for problem-solving at least half the time. However, trust remains an issue, with 30% of developers expressing skepticism about AI-generated code [3].

One of the most intriguing findings from the DORA report is that AI acts as an amplifier, magnifying both the strengths and weaknesses of development teams. High-performing organizations benefit greatly from AI integration, while struggling teams may find their dysfunctions exacerbated [2][3]. This phenomenon underscores the importance of having strong foundational practices and organizational structures in place before fully embracing AI.

Industry leaders have shared several strategies for effectively incorporating AI into software development: encouraging flexibility within clear guidelines and focusing on communication [1][4]; taking people on a journey where they can see visible progress [1][4]; giving regular feedback and extending AI tools beyond development teams to the rest of the organization [1][4]; sticking to security fundamentals and rigorously validating AI output [5]; and establishing clear organizational policies for AI use [5].

The rapid adoption of AI in software development has introduced new security challenges. Organizations report a significant increase in AI-powered attacks, with 58% experiencing such threats [5]. To address these concerns, developers must remain vigilant and adopt a 'secure by design' approach. This includes understanding the potential risks of AI tools, continuously updating security practices, and implementing robust organizational policies for AI use [5].

As AI continues to reshape the software development landscape, the industry faces both opportunities and challenges. The DORA report suggests that the traditional trade-off between speed and stability in development may be a myth, with top-performing teams achieving both rapid output and high quality [3]. However, to fully realize the benefits of AI while mitigating risks, organizations must invest in comprehensive education, continuous learning, and adaptable security practices tailored to the AI era [5].

In conclusion, AI has become an integral part of software development, offering significant productivity gains but also introducing new complexities. As the field evolves, developers and organizations must strike a balance between leveraging AI's capabilities and maintaining robust security and quality standards.
Summarized by Navi