[1]
Riding the AI current: why leaders are letting it flow
Would you let AI make lifestyle decisions for you? Statistically, the two people sitting next to you would.

Sponsored feature

How central is AI to your existence? According to research from tech services firm Endava, business leaders are increasingly adopting it in their private lives, and that is making them more confident about its business applications too.

Endava surveyed 500 employees at UK companies, all at management level or higher. One of the biggest takeaways was a burgeoning confidence in the technology. In a perhaps surprising willingness to hand over responsibility, two thirds of business leaders would trust an entirely automated AI to make lifestyle decisions for them. Hopefully, this means holiday plans rather than, say, starting a family or moving to another country. The same proportion believe that access to AI is as fundamental to society as access to basic utilities such as electricity and water.

"The fact that 66 percent of business leaders say they use it in their private life is a positive sign. You get to see what the benefits are. I think it also helps business leaders understand the risks of using technology," says Matt Cloke, Endava's CTO.

The research, though, uncovered something of a paradox: while business leaders are happy to let AI make decisions for them in their private lives, they are far less confident that they are introducing the right AI tools at work. Adopting AI is respondents' number one business strategy, ahead of introducing other tech and upskilling the workforce. Yet nearly half reckon their organization isn't investing in the right AI technologies to drive meaningful business value.

Despite this, half of C-suite respondents expect their company to be at an advanced stage of its AI transformation in two years' time. That confidence isn't mirrored at lower levels: just a third of middle managers feel the same, and only 29 percent of junior managers.
Fear is likely driving this disparity. Workers are often told that AI will make them more efficient, but can't see how. Instead, more junior staff are often concerned that AI will replace rather than help them.

The answer, says Cloke, is to make sure senior executives walk the walk themselves, while demonstrating how AI can actually make people's jobs easier. "If your C-suite is committed to deploying an AI strategy, they have to model the behavior that they expect to see within their organization," he says. "Don't tell people to use the tool if you're not prepared to use it yourself."

As a partner of OpenAI, Endava was an early adopter of ChatGPT Enterprise across the business, and created a cohort of trained 'champions' within the company. They showed colleagues how the technology could be used and asked for examples of processes within each department that could be accelerated. Then they recommended an AI tool or created a custom GPT that could help.

Another group, says Cloke, emerged naturally during the process, becoming influencers within the company who also helped to drive acceptance. "These were people that, once given the tool, came up with a way of using it that no one really told them how to do," he says. "They were just naturally gifted at being able to use that particular technology to solve a problem, so what we've been doing is helping show what is possible because these 'AI heroes' have done it."

The UK is a leader in AI adoption, consistently ranking highly in terms of AI readiness. AI is most widely used in financial businesses such as wealth management and payments, but is steadily making its way into all sectors of the economy. Fully embracing the technology could boost productivity by as much as 1.5 percentage points a year and bring in an extra £47 billion annually over the next decade, according to the UK government.
However, with the International Energy Agency (IEA) predicting that datacenter energy demand will double by 2030 in its base case scenario, business leaders are concerned about the ability of UK infrastructure to cope with the increasing demands of AI. "It is important that the UK can have access to datacenters, and even a large language model, which is independent of other providers," says Cloke.

This is a worry shared by the government itself, which earlier this year announced plans to create dedicated AI Growth Zones to speed up the planning process for AI infrastructure. Most decision-makers believe that the government is doing all it can to drive AI in the UK, and six in ten believe that the UK leads the world in AI.

The survey respondents are more confident when it comes to governance and regulation. Virtually all consider it important to have some sort of independent global organization or governing body in charge of creating common policy around AI, and more than nine in ten want to see the UK government take the lead here.

An international governing body might sound like an impossible aim, but such a governance mechanism needn't involve specific regulations covering the way AI technologies are designed. It wouldn't need to be a global version of the EU's AI Act, legislation that big tech companies are calling to delay, concerned it could hit the region's competitiveness.

There has been talk of such a governing body: last year the UN produced a framework for global AI governance, Governing AI for Humanity. It recommends the creation of an international scientific panel on AI, with dialog on best practices and common understandings of AI governance measures. It would include an AI standards exchange, a capacity development network, and a practical framework for AI training data governance and use. There would also be a global fund for AI.

"Regulation of nuclear power doesn't regulate its design.
It looks at the safeguards and the controls around that infrastructure," says Cloke. It asks whether it is being managed well, whether it is dangerous, and whether it could be compromised by bad actors. "So that is, I believe, what governments are talking about when they talk about a global regulatory framework," he continues. "I don't believe that what people are looking at is a UN AI Act, because I think people know that the world wouldn't sign up to it."

The good news is that nearly seven in ten respondents reckon their implementation of AI has already helped to increase their profits, with only 12 percent disagreeing. Most, though, are concerned that if their organization fails to make significant progress with AI, it will be losing market share within two years. Almost a quarter think that could happen within just one year.

The most pervasive issue is getting value for money from an AI investment. Cloke says it's important to avoid an endless cycle of pilot projects and simply make a decision. "The emerging evidence (if you like, the Moore's law of large language models) is that their performance doubles every seven months," he says. That means the longer you delay deciding which AI you're going to use, the further ahead your AI-using peers will be.

"So if you do get stuck in that cycle of endless proofs of concept, all you're really doing is committing yourself to having an even harder job catching up with your peers when you do decide to start using AI technology," he continues.

Return on investment is generally demonstrated through improvements in existing processes, but this isn't always the full story. As well as improving the efficiency of existing tasks, AI can often bring totally new capabilities.
"By giving someone the ability to load a spreadsheet into an AI tool, all of a sudden, rather than having to learn the skills of pivot tables and database queries and everything else, they can now have a natural language conversation with an Excel spreadsheet," says Cloke.
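Cloke's spreadsheet example can be made concrete. The sketch below is illustrative only — the function name and prompt wording are assumptions, not Endava's or OpenAI's implementation — but it shows one way a "chat with your spreadsheet" feature might serialize tabular data into a prompt for a chat model:

```python
# Illustrative sketch: turning a small spreadsheet plus a user question
# into a single LLM prompt. Names and prompt text are hypothetical.
import csv
import io

def spreadsheet_to_prompt(csv_text: str, question: str, max_rows: int = 50) -> str:
    """Inline up to max_rows of a CSV sheet into a prompt for a chat model."""
    rows = list(csv.reader(io.StringIO(csv_text)))
    header, body = rows[0], rows[1:max_rows + 1]
    table = "\n".join(",".join(r) for r in [header] + body)
    return (
        "You are a data analyst. Answer using only the table below.\n\n"
        f"{table}\n\n"
        f"Question: {question}"
    )

sheet = "region,sales\nNorth,120\nSouth,95\n"
prompt = spreadsheet_to_prompt(sheet, "Which region sold more?")
```

The resulting prompt string would then be sent to a chat-completions API; the model answers from the inlined table, sparing the user from building pivot tables or database queries by hand.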
[2]
The looming crisis of AI speed without guardrails
OpenAI's GPT-5 has arrived, bringing faster performance, more dependable reasoning and stronger tool use. It joins Claude Opus 4.1 and other frontier models in signaling a rapidly advancing cognitive frontier. While artificial general intelligence (AGI) remains in the future, DeepMind's Demis Hassabis has described this era as "10 times bigger than the Industrial Revolution, and maybe 10 times faster." According to OpenAI CEO Sam Altman, GPT-5 is "a significant fraction of the way to something very AGI-like."

What is unfolding is not just a shift in tools, but a reordering of personal value, purpose, meaning and institutional trust. The challenge ahead is not only to innovate, but to build the moral, civic and institutional frameworks necessary to absorb this acceleration without collapse.

Transformation without readiness

Anthropic CEO Dario Amodei provided an expansive view in his 2024 essay Machines of Loving Grace. He imagined AI compressing a century of human progress into a decade, with commensurate advances in health, economic development, mental well-being and even democratic governance. However, "it will not be achieved without a huge amount of effort and struggle by many brave and dedicated people." He added that everyone "will need to do their part both to prevent [AI] risks and to fully realize the benefits."

That is the fragile fulcrum on which these promises rest. Our AI-fueled future is near, even as the destination of this cognitive migration, nothing less than a reorientation of human purpose in a world of thinking machines, remains uncertain. While my earlier articles mapped where people and institutions must migrate, this one asks how we match acceleration with capacity. What this moment asks of us is not just technical adoption but cultural and social reinvention.
That is a hard ask, as our governance, educational systems and civic norms were forged in a slower, more linear era. They moved with the gravity of precedent, not the velocity of code.

Empowerment without inclusion

In a New Yorker essay, Dartmouth professor Dan Rockmore describes how a neuroscientist colleague conversed with ChatGPT on a long drive and, together, they brainstormed a possible solution to a problem in his research. ChatGPT suggested he investigate a technique called "disentanglement" to simplify his mathematical model, then wrote some code that was waiting at the end of the drive. The researcher ran it, and it worked. He said of the experience: "I feel like I'm accelerating with less time, I'm accelerating my learning, and improving my creativity, and I'm enjoying my work in a way I haven't in a while."

This is a compelling illustration of how powerful emerging AI technology can be in the hands of certain professionals. It is an excellent thought partner and collaborator, ideal for a university professor or anyone tasked with developing innovative ideas. But what about its usefulness for, and impact on, others? Consider the logistics planners, procurement managers and budget analysts whose roles risk displacement rather than enhancement. Without targeted retraining, robust social protections or institutional clarity, their futures could quickly move from uncertain to untenable.

The result is a yawning gap between what our technologies enable and what our social institutions can support. That is where true fragility lies: not in the AI tools themselves, but in the expectation that our existing systems can absorb their impact without fracture.

Change without infrastructure

Many have argued that some amount of societal disruption always accompanies a technological revolution, as when wagon wheel manufacturers were displaced by the rise of the automobile.
But these narratives quickly shift to the wonders of what came next. The Industrial Revolution, now remembered for its long-term gains, began with decades of upheaval, exploitation and institutional lag. Public health systems, labor protections and universal education were not designed in advance; they emerged later, often painfully, as reactions to harms already done. Charles Dickens' Oliver Twist, with its orphaned child laborers and brutal workhouses, captured the social dislocation of that era with haunting clarity. The book was not a critique of technology itself, but of a society unprepared for its consequences.

If the AI revolution is, as Hassabis suggests, an order of magnitude greater in scope and speed than that earlier transformation, then our margin for error is commensurately narrower and the timeline for societal response more compressed. In that context, hope is at best an invitation to dialogue and, at worst, a soft response to hard and fast-arriving problems.

Vision without pathways

What are those responses? Despite the sweeping visions, there remains little consensus on how these ambitions will be integrated into the core functions of society. What does a "gentle singularity" look like in a hospital that is understaffed and underfunded? How do "machines of loving grace" support a public school system still struggling to provide basic literacy? How do these utopian aspirations square with predictions of 20% unemployment within five years?

For all the talk of transformation, the mechanisms for wealth distribution, societal adaptation and business accountability remain vague at best. In many cases, AI is arriving haphazardly through unfettered market momentum. Language models are being embedded into government services, customer support, financial platforms and legal assistance tools, often without transparent review or meaningful public discourse, and almost certainly without regulation.
Even when these tools are helpful, their rollout bypasses the democratic and institutional channels that would otherwise confer trust. They arrive not through deliberation but as faits accomplis, products of unregulated market momentum. It is no wonder, then, that the result is not a coordinated march toward abundance, but a patchwork of adoption defined more by technical possibility than social preparedness. In this environment, power accrues not to those with the most wisdom or care, but to those who move fastest and scale widest. And as history has shown, speed without accountability rarely yields equitable outcomes.

Leadership without safeguards

For enterprise and technology leaders, the acceleration is not abstract; it is an operational crisis. As large-scale AI systems begin permeating workflows, customer touchpoints and internal decision-making, executives face a shrinking window in which to act. This is not only about preparing for AGI; it is about managing the systemic impact of powerful, ambient tools that already exceed the control structures of most organizations.

In a 2025 Thomson Reuters C-suite survey, more than 80% of respondents said their organizations are already using AI solutions, yet only 31% provided training for gen AI. That mismatch reveals a deeper readiness gap. Retraining cannot be a one-time initiative; it must become a core capability. In parallel, leaders must move beyond AI adoption to establishing internal governance, including model versioning, bias audits, human-in-the-loop safeguards and scenario planning. Without these, the risks are not only regulatory but reputational and strategic.

Many leaders speak of AI as a force for human augmentation rather than replacement. In theory, systems that enhance human capacity should enable more resilient and adaptive institutions. In practice, however, the pressure to cut costs, increase throughput and chase scale often pushes enterprises toward automation instead.
This may become particularly acute during the next economic downturn. Whether augmentation becomes the dominant paradigm or merely a talking point will be one of the defining choices of this era.

Faith without foresight

In a Guardian interview about AI, Hassabis said: "...if we're given the time, I believe in human ingenuity. I think we'll get this right." Perhaps "if we're given the time" is the phrase doing the heavy lifting here. Estimates are that even more powerful AI will emerge over the next five to 10 years. This short timeframe is likely the moment when society must get it right. "Of course," he added, "we've got to make sure [the benefits and prosperity from powerful AI] gets distributed fairly, but that's more of a political question." Indeed.

To get it right would require a historically unprecedented feat: to match exponential technological disruption with equally agile moral judgment, political clarity and institutional redesign. It is likely that no society, even with hindsight, has ever achieved such a feat. We survived the Industrial Revolution painfully, unevenly, and only with time. But as Hassabis and Amodei have made clear, we do not have much time.

To adapt systems of law, education, labor and governance for a world of ambient, scalable intelligence would demand coordinated action across governments, corporations and civil society. It would require foresight in a culture trained to reward short-term gains, and humility in a sector built on winner-take-all dynamics. Optimism is not misplaced; it is conditional on decisions we have shown little collective capacity to make.

Delay without excuse

It is tempting to believe we can accurately forecast the arc of the AI era, but history suggests otherwise. On the one hand, it is entirely plausible that the AI revolution will substantially improve life as we know it, with advances such as clean fusion energy, cures for the worst of our diseases and solutions to the climate crisis.
On the other, it could lead to large-scale unemployment or underemployment, social upheaval and even greater income inequality. Perhaps it will lead to all of this, or none of it. The truth is, we simply do not know.

On a "Plain English" podcast, host Derek Thompson spoke with Cal Newport, a professor of computer science at Georgetown University and the author of several books, including Deep Work. Addressing how we should prepare our children for the age of AI, Newport said: "We're still in an era of benchmarks. It's like early in the Industrial Revolution; we haven't replaced any of the looms yet. ... We will have much clearer answers in two years."

In that ambiguity lies both peril and potential. If we are, as Newport suggests, only at the threshold, then now is the time to prepare. The future may not arrive all at once, but its contours are already forming. Whether AI becomes our greatest leap or deepest rupture depends not only on the models we build, but on the moral imagination and fortitude we bring to meet them.

If socially harmful impacts from AI are expected within the next five to 10 years, we cannot wait for them to fully materialize before responding; waiting could amount to negligence. Even so, human nature tends to delay big decisions until crises become undeniable, and by then it is often too late to prevent the worst effects. Avoiding that fate with AI requires immediate investment in flexible regulatory frameworks, comprehensive retraining programs, equitable distribution of benefits and a robust social safety net.

If we want AI's future to be one of abundance rather than disruption, we must design the structures now. The future will not wait; it will arrive with or without our guardrails. In a race toward powerful AI, it is time to stop behaving as if we are still at the starting line.
A comprehensive look at the rapid adoption of AI technologies by business leaders, highlighting both the enthusiasm and concerns surrounding its implementation in professional and personal spheres.
A recent study by tech services firm Endava has revealed a growing trend among business leaders to adopt artificial intelligence (AI) in their personal lives, leading to increased confidence in its professional applications. The survey of 500 UK-based employees at management level or higher found that two-thirds of respondents would trust an entirely automated AI to make lifestyle decisions for them [1].
Source: The Register
Matt Cloke, Endava's CTO, views this as a positive sign, stating, "You get to see what the benefits are. I think it also helps business leaders understand the risks of using technology" [1]. This personal experience with AI appears to be translating into professional strategies, with AI adoption ranking as the top business priority for many organizations.
Despite the enthusiasm at the top, the research uncovered a significant disparity in confidence levels across different management tiers. While half of C-suite respondents expect their company to be at an advanced stage of AI transformation within two years, only a third of middle management and 29% of junior management share this optimism [1].
This gap in perception highlights the need for better communication and demonstration of AI's benefits across all levels of an organization. Cloke emphasizes the importance of leadership by example, stating, "Don't tell people to use the tool if you're not prepared to use it yourself" [1].
The UK is positioning itself as a leader in AI adoption, consistently ranking highly in terms of AI readiness. The technology is making inroads across various sectors, with financial services leading the way. The UK government estimates that fully embracing AI could boost productivity by up to 1.5 percentage points annually and contribute an additional £47 billion to the economy over the next decade [1].
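As a rough sanity check on the productivity figure, an extra 1.5 percentage points of growth per year compounds substantially over a decade. This is an illustrative back-of-envelope calculation, not the UK government's actual model:

```python
# Illustrative compounding of a 1.5-percentage-point annual productivity
# boost over ten years (not the UK government's methodology).
growth_boost = 0.015   # 1.5 percentage points per year
years = 10

cumulative = (1 + growth_boost) ** years - 1
print(f"Cumulative uplift after {years} years: {cumulative:.1%}")
# Compounds to roughly a 16% cumulative productivity uplift.
```

That compounding effect is why a seemingly modest annual boost translates into tens of billions of pounds of additional output per year by the end of the decade.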
Source: VentureBeat
As AI adoption accelerates, concerns are emerging about the UK's infrastructure capacity to support the growing demands of AI technologies. The International Energy Agency predicts that datacenter energy demands could double by 2030 [1]. In response, the UK government has announced plans to create dedicated AI Growth Zones to streamline the planning process for AI infrastructure.
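To put the IEA projection in perspective, demand doubling by 2030 implies a steep compound growth rate. The 2024 base year below is an assumption for illustration; the articles above do not state one:

```python
# Implied annual growth rate if datacenter electricity demand doubles
# between 2024 and 2030 (the 2024 base year is an assumption).
years = 2030 - 2024
annual_growth = 2 ** (1 / years) - 1
print(f"Implied annual growth: {annual_growth:.1%}")
# Doubling over six years corresponds to roughly 12% growth per year.
```

Sustained double-digit annual growth in electricity demand is what underlies business leaders' concern about grid and datacenter capacity.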
The rapid advancement of AI technologies has sparked discussions about the need for global governance. OpenAI's GPT-5 and other frontier models are pushing the boundaries of AI capabilities, with OpenAI CEO Sam Altman describing GPT-5 as "a significant fraction of the way to something very AGI-like" [2].
Nearly all survey respondents consider it important to have an independent global organization or governing body responsible for creating common AI policies. Over 90% want to see the UK government take a leading role in this effort [1]. The United Nations has already produced a framework for global AI governance, titled "Governing AI for Humanity," which recommends the creation of an international scientific panel on AI and various mechanisms for dialogue and best practices [1].
While the potential benefits of AI are significant, experts warn of the need for careful management of its societal impact. Anthropic CEO Dario Amodei envisions AI compressing a century of human progress into a decade but cautions that realizing these benefits will require "a huge amount of effort and struggle by many brave and dedicated people" [2].
The rapid pace of AI development raises concerns about societal readiness and the potential for disruption. Critics argue that the mechanisms for wealth distribution, societal adaptation, and business accountability remain vague, even as AI is being integrated into various sectors without transparent review or meaningful public discourse [2].