3 Sources
[1]
Companies are already using agentic AI to make decisions, but governance is lagging behind
Businesses are acting fast to adopt agentic AI - artificial intelligence systems that work without human guidance - but have been much slower to put governance in place to oversee them, a new survey shows. That mismatch is a major source of risk in AI adoption. In my view, it's also a business opportunity. I'm a professor of management information systems at Drexel University's LeBow College of Business, which recently surveyed more than 500 data professionals through its Center for Applied AI & Business Analytics. We found that 41% of organizations are using agentic AI in their daily operations. These aren't just pilot projects or one-off tests. They're part of regular workflows. At the same time, governance is lagging. Only 27% of organizations say their governance frameworks are mature enough to monitor and manage these systems effectively.

In this context, governance is not about regulation or unnecessary rules. It means having policies and practices that let people clearly influence how autonomous systems work, including who is responsible for decisions, how behavior is checked, and when humans should get involved. This mismatch can become a problem when autonomous systems act in real situations before anyone can intervene. For example, during a recent power outage in San Francisco, autonomous robotaxis got stuck at intersections, blocking emergency vehicles and confusing other drivers. The situation showed that even when autonomous systems behave "as designed," unexpected conditions can lead to undesirable outcomes. This raises a big question: When something goes wrong with AI, who is responsible - and who can intervene?

Why governance matters

When AI systems act on their own, responsibility no longer lies where organizations expect it. Decisions still happen, but ownership is harder to trace. For instance, in financial services, fraud detection systems increasingly act in real time to block suspicious activity before a human ever reviews the case. Customers often only find out when their card is declined. So, what if your card is mistakenly declined by an AI system? In that situation, the problem isn't with the technology itself - it's working as it was designed - but with accountability. Research on human-AI governance shows that problems happen when organizations don't clearly define how people and autonomous systems should work together. This lack of clarity makes it hard to know who is responsible and when they should step in. Without governance designed for autonomy, small issues can quietly snowball. Oversight becomes sporadic and trust weakens, not because systems fail outright, but because people struggle to explain or stand behind what the systems do.

When humans enter the loop too late

In many organizations, humans are technically "in the loop," but only after autonomous systems have already acted. People tend to get involved once a problem becomes visible - when a price looks wrong, a transaction is flagged or a customer complains. By that point, the decision has already been made, and human review becomes corrective rather than supervisory. Late intervention can limit the fallout from individual decisions, but it rarely clarifies who is accountable. Outcomes may be corrected, yet responsibility remains unclear. Recent guidance shows that when authority is unclear, human oversight becomes informal and inconsistent. The problem is not human involvement, but timing.
Without governance designed upfront, people act as a safety valve rather than as accountable decision-makers.

How governance determines who moves ahead

Agentic AI often brings fast, early results, especially when tasks are first automated. Our survey found that many companies see these early benefits. But as autonomous systems grow, organizations often add manual checks and approval steps to manage risk. Over time, what was once simple slowly becomes more complicated. Decision-making slows down, work-arounds increase, and the benefits of automation fade. This happens not because the technology stops working, but because people never fully trust autonomous systems.

This slowdown doesn't have to happen. Our survey shows a clear difference: Many organizations see early gains from autonomous AI, but those with stronger governance are much more likely to turn those gains into long-term results, such as greater efficiency and revenue growth. The key difference isn't ambition or technical skills, but being prepared. Good governance does not limit autonomy. It makes it workable by clarifying who owns decisions, how system behavior is monitored, and when people should intervene. International guidance from the OECD - the Organization for Economic Cooperation and Development - emphasizes this point: Accountability and human oversight need to be designed into AI systems from the start, not added later. Rather than slowing innovation, governance creates the confidence organizations need to extend autonomy instead of quietly pulling it back.

The next advantage is smarter governance

The next competitive advantage in AI will not come from faster adoption, but from smarter governance. As autonomous systems take on more responsibility, success will belong to organizations that clearly define ownership, oversight and intervention from the start. In the era of agentic AI, confidence will accrue to the organizations that govern best, not simply those that adopt first.
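To make the article's point about timing concrete, here is a minimal, purely illustrative sketch (not taken from the survey or the article) of a fraud-screening step in which a risk threshold and a named owner determine, before the customer is affected, whether the system acts on its own or a human reviews first. The thresholds, owner names and data model are assumptions chosen for illustration.

```python
from dataclasses import dataclass


@dataclass
class Decision:
    action: str   # "allow", "block" or "escalate"
    owner: str    # who is accountable for this decision
    reason: str   # recorded so the outcome can be explained later


def screen_transaction(risk_score: float,
                       auto_block_threshold: float = 0.95,
                       review_threshold: float = 0.70,
                       reviewer: str = "fraud-ops on-call") -> Decision:
    """Route one transaction based on a model risk score in [0, 1].

    Very high scores are blocked automatically, borderline scores are held
    for a named human reviewer *before* the customer is affected, and low
    scores pass through.
    """
    if risk_score >= auto_block_threshold:
        return Decision("block", "fraud model (automated)",
                        f"risk {risk_score:.2f} >= {auto_block_threshold}")
    if risk_score >= review_threshold:
        # The human enters the loop before the action, not after the complaint.
        return Decision("escalate", reviewer,
                        f"risk {risk_score:.2f} in the human-review band")
    return Decision("allow", "fraud model (automated)",
                    f"risk {risk_score:.2f} below the review band")


if __name__ == "__main__":
    for score in (0.42, 0.81, 0.97):
        print(screen_transaction(score))
```

The point of the sketch is that the escalation rule and the accountable owner are decided in advance, so human review is supervisory rather than corrective.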
[2]
Why effective AI governance is becoming a growth strategy
Clear accountability, transparency, fairness and integrity must be built into everyday workflows, system design and decision-making rather than left as policy statements.

Business leaders often talk about artificial intelligence (AI) governance as if it's a speed bump on the road to high-impact innovation, slowing everything down while competitors sprint ahead unfettered. The truth is that governance provides the traction for acceleration while keeping your business on the road and preventing it from veering off-course. Getting governance right from the start helps you drive in the fast lane and stay there.

And without good governance, AI initiatives tend to fragment. They can get stuck in data silos, incomplete processes, inadequate monitoring, undefined roles, duplication of effort and inefficient use of resources. The benefits you seek can quickly transform into potentially negative consequences.

As AI moves from experimentation to enterprise-scale deployment, governance is thus now a critical driver of sustainable growth, providing the trust, clarity and accountability needed to scale intelligent systems responsibly. By embedding responsibility, transparency and ethical oversight into the architecture of AI, organizations can unlock business value while strengthening public and stakeholder confidence. In an economy increasingly shaped by intelligent systems, governance is more than a safeguard; it is a strategic advantage.

Effective AI governance is a comprehensive framework that bridges and combines strategies, policies and processes, connecting business ambition, ethical intent and operational execution into a coherent system, ensuring AI can be trusted and scaled responsibly. To foster trust in AI systems and confidently scale AI across your business, leaders must focus on these three pillars:

At first glance, governance seems to be all about preventing harm, but that's only part of the story, albeit still vitally important. The business and customer value come from the power of AI governance to unlock sustainable growth. It can do this by improving customer engagement, opening new revenue streams and ensuring that AI initiatives are thoroughly vetted for safety and business impact. This dual focus - social and business value - pushes organizations higher on the AI value chain. Here are five prime focal points for balancing responsible AI with measurable business outcomes:

This broad scope is why many leading organizations have established governance offices, review boards, safety councils and operational AI teams, as well as appointing chief AI officers. Rather than creating paperwork, the point is to translate policy into a glide path for effective action and repeated innovation. Once the principles, standards and teams are in place, the work of embedding AI governance across operations can begin. There are three primary milestones for comprehensive and actionable AI governance:

Executives want speed, but they need a steering wheel. Governance is how you steer toward your goals, responsibly, sustainably and ethically, with trust at the core. To get ahead and stay ahead with AI, organizations must build governance into their operating architecture before driving AI into their applications. That's how you move fast without breaking your business, and it's how powerful AI becomes dependable, trustworthy and fair even as it transforms your business and our world.
[3]
The Case for Distributed A.I. Governance in an Era of Enterprise A.I.
As companies rush to adopt A.I., distributed governance offers a path to scale innovation responsibly without sacrificing control.

It's no longer news that A.I. is everywhere. Yet while nearly all companies have adopted some form of A.I., few have been able to translate that adoption into meaningful business value. The successful few have bridged the gap through distributed A.I. governance, an approach that ensures that A.I. is integrated safely, ethically and responsibly. Until companies strike the right balance between innovation and control, they will be stuck in a "no man's land" between adoption and value, where implementers and users alike are unsure how to proceed.

What has changed, and changed quickly, is the external environment in which A.I. is being deployed. In the past year alone, companies have faced a surge of regulatory scrutiny, shareholder questions and customer expectations around how A.I. systems are governed. The E.U.'s A.I. Act has moved from theory to an enforcement roadmap, U.S. regulators have begun signaling that "algorithmic accountability" will be treated as a compliance issue rather than a best practice, and enterprise buyers are increasingly asking vendors to explain how their models are monitored, audited and controlled. In this environment, governance has become a gating factor for scaling A.I. at all. Companies that cannot demonstrate clear ownership, escalation paths and guardrails are finding that pilots stall, procurement cycles drag and promising initiatives quietly die on the vine.

The state of play: two common approaches to applying A.I. at scale

While I'm currently a professor and the associate director of the Institute for Applied Artificial Intelligence (IAAI) at the Kogod School of Business, my "prior life" was in building pre-IPO SaaS companies, and I remain deeply embedded in that ecosystem. As a result, I've seen firsthand how companies attempt this balancing act and fall short. The most common pitfalls involve optimizing for one extreme: either A.I. innovation at all costs, or total, centralized control. Although both approaches are typically well-intentioned, neither achieves a sustainable equilibrium.

Companies that prioritize A.I. innovation tend to foster a culture of rapid experimentation. Without adequate governance, however, these efforts often become fragmented and risky. The absence of clear checks and balances can lead to data leaks, model drift -- where models become less accurate as new patterns emerge -- and ethical blind spots that expose organizations to litigation while eroding brand trust. Take, for example, Air Canada's decision to launch an A.I. chatbot on its website to answer customer questions. While the idea itself was forward-thinking, the lack of appropriate oversight and strategic guardrails ultimately made the initiative far more costly than anticipated. What might have been a contained operational error instead became a governance failure that highlighted how even narrow A.I. deployments can have outsized downstream consequences when ownership and accountability are unclear.
On the other end of the spectrum are companies that prioritize centralized control over innovation in an effort to minimize or eliminate A.I.-related risk. To do so, they often create a singular A.I.-focused team or department through which all A.I. initiatives are routed. Not only does this centralized approach concentrate governance responsibility among a select few -- leaving the broader organization disengaged at best, or wholly unaware at worst -- but it also creates bottlenecks, slows approvals and stifles innovation. Entrepreneurial teams frustrated by bureaucratic red tape will seek alternatives, giving rise to shadow A.I.: employees bringing their own A.I. tools to the workplace without oversight. This is just one byproduct that ironically introduces more risk. A high-profile example occurred at Samsung in 2023, when multiple employees in the semiconductor division unintentionally leaked sensitive information while using ChatGPT to troubleshoot source code.

What makes shadow A.I. particularly difficult to manage today is the speed at which these tools evolve. Employees are no longer just pasting text or code into chatbots. They are now building automations, connecting A.I. agents to internal data sources and sharing prompts across teams. Without distributed governance, these informal systems can become deeply embedded in work before leadership even knows they exist. The main takeaway: when companies pursue total control over tech-enabled functions, they run the risk of creating the very security problems their approach is designed to avoid.

Moving from A.I. adoption to A.I. value

Too often, governance is treated as an organizational chart problem. But A.I. systems behave differently from traditional enterprise software. They evolve over time, interact unpredictably with new data and are shaped as much by human use as by technical design. Because neither extreme -- unchecked innovation nor rigid control -- works, companies have to reconsider A.I. governance as a cultural challenge, not just a technical one. The solution lies in building a distributed A.I. governance system grounded in three essentials: culture, process and data. Together, these pillars enable both shared responsibility and support systems for change, bridging the gap between using A.I. for its own sake and generating real return on investment by applying A.I. to novel problems.

Culture and wayfinding: crafting an A.I. charter

A successful distributed A.I. governance system depends on cultivating a strong organizational culture around A.I. One relevant example can be found in Spotify's model of decentralized autonomy. While this approach may not translate directly to every organization, the larger lesson is universal: companies need to build a culture of expectations around A.I. that is authentic to their teams and aligned with their strategic objectives. An effective way to establish this culture is through a clearly defined and operationalized A.I. Charter: a living document that evolves alongside an organization's A.I. advancements and strategic vision. The Charter serves as both a North Star and a set of cultural boundaries, articulating the organization's goals for A.I. while specifying how A.I. will, and will not, be used. Importantly, the Charter should not live on an internal wiki, disconnected from day-to-day work. Leading organizations treat it as input to product reviews, vendor selection and even performance dialogue.
When teams can point to the Charter to justify not pursuing a use case, or to escalate concerns early, it becomes a tool for speed, not friction. A well-designed A.I. Charter will address two core elements: the company's objectives for adopting A.I. and its non-negotiable values for ethical and responsible use. Clearly outlining the purpose of A.I. initiatives and the limits of acceptable practices creates alignment across the workforce and sets expectations for behavior. Embedding the A.I. Charter into key objectives and other goal-oriented measures allows employees to translate A.I. theory into everyday practice -- fostering shared ownership of governance norms and building resilience as the A.I. landscape evolves.

Business process analysis to mark and measure

A distributed A.I. governance system must also be anchored in rigorous business process analysis. Every A.I. initiative, whether enhancing an existing workflow or creating an entirely new one, should begin by mapping the current process. This foundational step makes risks visible, uncovers upstream and downstream dependencies that may amplify those risks, and builds a shared understanding of how A.I. interventions cascade across the organization. By visualizing these interdependencies, teams gain both clarity and accountability. When employees understand the full impact chain and existing risk profile, they are better equipped to make informed decisions about where A.I. should or should not be deployed. This approach also enables teams to define the value proposition of their A.I. initiatives, ensuring that benefits meaningfully outweigh potential risks. Embedding these governance protocols directly into process design, rather than layering them on retroactively, allows teams to innovate responsibly without creating bottlenecks. In this way, business process analysis transforms governance from an external constraint into an integrated, scalable decision-making framework that drives both control and creativity.

Strong data governance equals effective A.I. governance

Effective A.I. governance ultimately depends on strong data governance. The familiar adage "garbage in, garbage out" is only amplified with A.I. systems, where low-quality or biased data can amplify risks and undermine business value at scale. While centralized data teams may manage the technical infrastructure, every function that touches A.I. must be accountable for ensuring data quality, validating model outputs and regularly auditing drift or bias in their A.I. solutions. This distributed approach is also what positions companies to respond to regulatory inquiries and audits with confidence. When data lineage, model assumptions and validation practices are documented at the point of use, organizations can demonstrate responsible stewardship without scrambling to retrofit controls. When data governance is embedded throughout the company, A.I. delivers consistent, explainable value rather than exposing and magnifying hidden weaknesses.

Why the effort is worth it

Distributed A.I. governance represents the sweet spot for scaling and sustaining A.I.-driven value. As A.I. continues to be embedded in core business functions, the question evolves from whether companies will use A.I. to whether they can govern it at the pace their strategies demand. In this way, distributed A.I. governance becomes an operating model designed for systems that learn, adapt and scale.
These systems help yield the benefits of speed -- traditionally seen in innovation-first institutions -- while maintaining the integrity and risk management of centralized oversight. And while building a workable system might seem daunting, it is ultimately the most effective way to achieve value at scale in a business environment that will only grow more deeply integrated with A.I. Organizations that embrace it will move faster precisely because they are in control, not in spite of it.
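As one hedged illustration of what the routine "auditing drift or bias" called for in the data-governance discussion above might look like in practice, the sketch below computes a population stability index (PSI) that compares a feature's live distribution against its training baseline. The binning scheme, the 0.2 rule of thumb and the alerting behavior are assumptions for illustration, not something the article prescribes.

```python
import math
from typing import Sequence


def psi(baseline: Sequence[float], live: Sequence[float], bins: int = 10) -> float:
    """Population stability index between two samples of one numeric feature."""
    lo = min(min(baseline), min(live))
    hi = max(max(baseline), max(live))
    width = (hi - lo) / bins or 1.0  # guard against a constant feature

    def bucket_shares(sample: Sequence[float]) -> list:
        counts = [0] * bins
        for x in sample:
            idx = min(int((x - lo) / width), bins - 1)
            counts[idx] += 1
        n = len(sample)
        # A small floor avoids log(0) when a bucket is empty.
        return [max(c / n, 1e-4) for c in counts]

    base_shares = bucket_shares(baseline)
    live_shares = bucket_shares(live)
    return sum((live_s - base_s) * math.log(live_s / base_s)
               for base_s, live_s in zip(base_shares, live_shares))


if __name__ == "__main__":
    baseline = [0.1 * i for i in range(100)]     # stand-in for training data
    live = [0.1 * i + 2.0 for i in range(100)]   # live data, shifted upward
    score = psi(baseline, live)
    # Common rule of thumb: PSI above 0.2 suggests drift worth a human look.
    verdict = "flag for the owning team" if score > 0.2 else "no action"
    print(f"PSI = {score:.3f} -> {verdict}")
```

In a distributed-governance setup, a check like this would run on a schedule inside each team that owns an A.I. solution, with the result logged alongside the data lineage so an audit can see when drift was detected and who acted on it.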
A new survey reveals 41% of organizations use agentic AI in daily operations, yet only 27% have mature governance frameworks to oversee these autonomous systems. This mismatch creates significant risk as artificial intelligence makes decisions without human guidance, raising urgent questions about accountability and who intervenes when AI systems fail.
Businesses are moving rapidly to integrate agentic AI into their operations, but AI governance structures are struggling to keep up with the pace of deployment. A survey conducted by Drexel University's LeBow College of Business, which polled more than 500 data professionals, found that 41% of organizations are already using agentic AI in their daily operations [1]. These aren't experimental pilots—they're embedded in regular workflows where artificial intelligence systems operate without human guidance. Yet only 27% of organizations report having governance frameworks mature enough to monitor and manage these autonomous systems effectively [1]. This gap between adoption and oversight represents a major source of risk, but also a significant business opportunity for organizations that get it right.
Source: The Conversation
The mismatch between deployment speed and governance maturity creates a fundamental accountability problem. When autonomous systems act in real-world situations, responsibility becomes difficult to trace. Financial services firms, for instance, increasingly deploy fraud detection systems that block suspicious activity in real time before any human intervention occurs [1]. Customers often discover this only when their cards are declined. A recent incident in San Francisco illustrated these risks vividly: during a power outage, autonomous robotaxis became stuck at intersections, blocking emergency vehicles and creating confusion for other drivers [1]. Even when systems behave as designed, unexpected conditions can produce undesirable outcomes. The critical question becomes: who is responsible when something goes wrong, and who has the authority to intervene?

In many companies, humans remain technically "in the loop," but their involvement happens only after autonomous systems have already acted. People typically enter the process when problems become visible—a price appears incorrect, a transaction gets flagged, or a customer complains [1]. By that point, decisions have been made and human review becomes corrective rather than supervisory. This late intervention may limit damage from individual decisions, but it rarely clarifies accountability. Research on human-AI collaboration shows that problems emerge when organizations fail to define clearly how people and autonomous systems should work together [1]. Without governance designed upfront, people function as a safety valve rather than as accountable decision-makers, and trust gradually erodes.

Contrary to the perception that governance slows innovation, effective AI governance is emerging as a critical driver of sustainable growth. Business leaders often view governance as an obstacle that gives competitors an advantage [2]. The reality is different: governance provides the traction needed for acceleration while keeping organizations on course. Clear accountability, transparency, fairness and integrity must be built into everyday workflows, system design and decision-making rather than left as policy statements [2]. Organizations with stronger governance frameworks are significantly more likely to turn early gains into long-term results, including greater efficiency and revenue growth [1]. The key difference isn't ambition or technical skills—it's preparedness. Without proper governance, AI initiatives fragment into data silos, incomplete processes, inadequate monitoring, undefined roles and inefficient resource use [2].

As companies face mounting regulatory scrutiny and customer expectations, a new approach is gaining traction: distributed A.I. governance. While nearly all companies have adopted some form of artificial intelligence, few have translated that adoption into meaningful business value [3]. The successful ones have bridged this gap through distributed governance models that ensure AI is integrated safely, ethically and responsibly. The external environment has shifted dramatically—the EU A.I. Act has moved from theory to enforcement, U.S. regulators are treating algorithmic accountability as a compliance issue, and enterprise buyers increasingly demand explanations of how models are monitored and controlled [3]. Companies that cannot demonstrate clear ownership, escalation paths and guardrails find that pilots stall and promising initiatives quietly fail.

Organizations typically fall into one of two traps when implementing AI at scale. Those prioritizing innovation at all costs foster rapid experimentation, but without adequate governance these efforts become fragmented and risky. The absence of checks and balances can lead to data leaks, model drift, and ethical blind spots that expose organizations to litigation while eroding brand trust [3]. Air Canada's experience with an AI chatbot illustrates this risk: what began as a forward-thinking initiative became far more costly than anticipated due to a lack of oversight and strategic guardrails [3]. Even narrow AI deployments can have outsized consequences when ownership and accountability remain unclear.

On the opposite extreme, companies that prioritize centralized control create bottlenecks that slow approvals and stifle innovation. This approach concentrates governance responsibility among a select few, leaving the broader organization disengaged or unaware [3]. Frustrated by bureaucratic red tape, entrepreneurial teams seek alternatives, giving rise to shadow A.I.—employees bringing their own AI tools to work without oversight. A notable incident occurred at Samsung in 2023 when semiconductor division employees unintentionally leaked sensitive information while using ChatGPT to troubleshoot source code [3]. Today's shadow AI is particularly difficult to manage because employees aren't just pasting text into chatbots—they're building automations, connecting AI agents to internal data sources, and sharing prompts across teams. Without distributed governance, these informal systems become deeply embedded before leadership knows they exist.

Effective AI governance creates a comprehensive framework connecting business ambition, ethical intent and operational execution into a coherent system that enables responsible scaling of AI [2]. This dual focus on social and business value helps organizations unlock business value by improving customer engagement, opening new revenue streams, and ensuring AI initiatives are thoroughly vetted for safety and impact. Leading organizations have established governance offices, review boards, safety councils and operational AI teams, appointing chief AI officers to translate policy into effective action and repeated innovation [2]. International guidance from the OECD emphasizes that accountability and human oversight need to be designed into AI systems from the start, not added later [1]. Rather than limiting autonomy, good governance makes it workable by clarifying who owns decisions, how system behavior is monitored, and when people should intervene. In an economy increasingly shaped by intelligent systems, governance isn't just a safeguard—it's a strategic advantage that determines which organizations move ahead and which get stuck between adoption and value creation.
Source: Observer
Summarized by Navi