5 Sources
[1]
Riding the AI current: why leaders are letting it flow
Would you let AI make lifestyle decisions for you? Statistically, the two people sitting next to you would.

How central is AI to your existence? According to research from tech services firm Endava, business leaders are increasingly adopting it in their private lives. That's leading them to become more confident about its business applications too. Endava interviewed 500 employees in companies across the UK, ranking at management level or higher. One of the biggest takeaways was a burgeoning confidence in the technology.

In a perhaps surprising willingness to hand over responsibility, two-thirds of business leaders would trust an entirely automated AI to make their lifestyle decisions for them. Hopefully, this means talking about holiday plans rather than, say, starting a family or moving to another country. The same proportion believe that access to AI is just as fundamental to society as access to basic utilities such as electricity and water. "The fact that 66 percent of business leaders say they use it in their private life is a positive sign. You get to see what the benefits are. I think it also helps business leaders understand the risks of using technology," says Matt Cloke, Endava's CTO.

The research, though, uncovered something of a paradox: while business leaders are happy to let AI make decisions for them in their private life, they're far less confident that they're introducing the right AI tools at work. Adopting AI is respondents' number one business strategy, ahead of introducing other tech and upskilling the workforce. Yet nearly half reckon their organization isn't investing in the right AI technologies to drive meaningful business value. Despite this, half of C-suite respondents expect their company to be at an advanced stage of its AI transformation in two years' time.

However, this confidence isn't mirrored at lower levels. Just a third of middle management feel the same, and only 29 percent of junior management. Fear is likely driving this disparity. Workers are often told that AI will make them more efficient, but can't see how. Instead, more junior staff are often concerned that AI will replace rather than help them. The answer, says Cloke, is to make sure that senior executives are walking the walk themselves, while demonstrating how AI can actually make people's jobs easier. "If your C-suite is committed to deploying an AI strategy, they have to model the behavior that they expect to see within their organization," he says. "Don't tell people to use the tool if you're not prepared to use it yourself."

As a partner of OpenAI, Endava was an early adopter of ChatGPT Enterprise across the business, and created a cohort of trained 'champions' within the company. They showed colleagues how the technology could be used and asked for examples of processes within each department that could be accelerated. Then they recommended an AI tool or created a custom GPT that could help. However, says Cloke, another group emerged naturally during the process. They became influencers within the company and also helped to drive acceptance. "These were people that, once given the tool, came up with a way of using it in a way that no one really told them how to do," he says. "They were just naturally gifted at being able to use that particular technology to solve a problem, so what we've been doing is helping show what is possible because these 'AI heroes' have done it."
The UK is a leader in AI adoption, consistently ranking highly in terms of AI readiness. AI is most widely used in financial businesses such as wealth management and payments, but is steadily making its way into all sectors of the economy. Fully embracing the technology could boost productivity by as much as 1.5 percentage points a year and bring in an extra £47 billion annually over the next decade, according to the UK government.

However, with the International Energy Agency (IEA) predicting datacenter energy demands will double by 2030 in its base case scenario, business leaders are concerned about the ability of UK infrastructure to cope with the increasing demands of AI. "It is important that the UK can have access to datacenters and even a large language model which is independent of other providers," says Cloke. This is a worry shared by the government itself, which earlier this year announced plans to create dedicated AI Growth Zones to speed up the planning process for AI infrastructure. Most decision-makers believe that the government is doing all it can to drive AI in the UK, and six in ten believe that the UK leads the world in AI.

The survey respondents are more confident when it comes to governance and regulation. Virtually all consider it important to have some sort of independent global organization or governing body in charge of creating common policy around AI. More than nine in ten want to see the UK government take the lead here. An international governing body might sound like an impossible aim, but such a governance mechanism needn't involve specific regulations covering the way AI technologies are designed. It wouldn't need to be a global version of the EU's AI Act. Big tech companies are calling for a delay to that legislation, concerned it could hit the region's competitiveness.

There has been talk of such a governing body, with the UN last year producing a framework for global AI governance, Governing AI for Humanity. It recommends the creation of an international scientific panel on AI, with dialog on best practices and common understandings on AI governance measures. It would include an AI standards exchange, a capacity development network, and a practical framework for AI training data governance and use. There would also be a global fund for AI.

"Regulation of nuclear power doesn't regulate its design. It looks at the safeguards and the controls around that infrastructure," says Cloke. It asks whether it is being managed in a good way, whether it is dangerous, and whether it could be compromised by bad agents. "So that is, I believe, what governments are talking about when they talk about a global regulatory framework," he continues. "I don't believe that what people are looking at is a UN AI Act because I think people know that the world wouldn't sign up to it."

The good news is that nearly seven in ten respondents reckon their implementation of AI has already helped to increase their profits, with only 12 percent disagreeing. Most, though, are concerned that if their organization fails to make significant progress with AI, it will be losing market share within two years. Almost a quarter think that could happen within just one year. The most pervasive issue is getting value for money from an AI investment. Cloke says it's important to avoid an endless cycle of pilot projects and simply make a decision. "The emerging evidence (if you like, the Moore's law of large language models) is that their performance doubles every seven months," he says.
That means the longer you delay making a decision around which AI you're going to use, the further ahead of you your AI-using peers will be. "So if you do get stuck in that cycle of endless proofs of concept, all you're really doing is committing yourself to having an even harder job catching up with your peers when you do decide to start using AI technology," he continues.

Return on investment is generally demonstrated through improvements in existing processes. But this isn't always the full story. As well as improving the efficiency of existing tasks, AI can often bring totally new capabilities. "By giving someone the ability to load a spreadsheet into an AI tool, all of a sudden, rather than having to learn the skills of pivot tables and database queries and everything else, they can now have a natural language conversation with an Excel spreadsheet," says Cloke.
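To make Cloke's closing example concrete, here is a minimal sketch of the "talk to a spreadsheet" pattern in Python. It is illustrative only: the file name, the question and the model name are assumptions rather than anything Endava has described, and it presumes an OpenAI-style chat client and a sheet small enough to paste into a prompt.

```python
# A hedged sketch: serialize a spreadsheet and ask a chat model about it in
# natural language. File, model and question are placeholder assumptions.
import pandas as pd
from openai import OpenAI  # assumes the official openai>=1.0 client is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def ask_spreadsheet(path: str, question: str) -> str:
    """Load a spreadsheet and answer a natural-language question about it."""
    df = pd.read_excel(path)        # .xlsx files also need the openpyxl package
    table = df.to_csv(index=False)  # serialize the sheet as plain-text context
    response = client.chat.completions.create(
        model="gpt-4o-mini",        # placeholder model name
        messages=[
            {"role": "system",
             "content": "Answer questions using only the CSV data provided."},
            {"role": "user",
             "content": f"Data:\n{table}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content


# The kind of question that might otherwise require a pivot table:
# print(ask_spreadsheet("sales.xlsx", "Which region had the highest Q3 revenue?"))
```

In practice, production tools tend to have the model generate a query or formula against the data rather than pasting the whole sheet into the prompt, but the user-facing idea is the same: a plain-English question in, an answer out.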
[2]
MIT report misunderstood: Shadow AI economy booms while headlines cry failure
The most widely cited statistic from a new MIT report has been deeply misunderstood. While headlines trumpet that "95% of generative AI pilots at companies are failing," the report actually reveals something far more remarkable: the fastest and most successful enterprise technology adoption in corporate history is happening right under executives' noses.

The study, released this week by MIT's Project NANDA, has sparked anxiety across social media and business circles, with many interpreting it as evidence that artificial intelligence is failing to deliver on its promises. But a closer reading of the 26-page report tells a starkly different story -- one of unprecedented grassroots technology adoption that has quietly revolutionized work while corporate initiatives stumble.

The researchers found that 90% of employees regularly use personal AI tools for work, even though only 40% of their companies have official AI subscriptions. "While only 40% of companies say they purchased an official LLM subscription, workers from over 90% of the companies we surveyed reported regular use of personal AI tools for work tasks," the study explains. "In fact, almost every single person used an LLM in some form for their work."

How employees cracked the AI code while executives stumbled

The MIT researchers discovered what they call a "shadow AI economy" where workers use personal ChatGPT accounts, Claude subscriptions and other consumer tools to handle significant portions of their jobs. These employees aren't just experimenting -- they're using AI "multiples times a day every day of their weekly workload," the study found. This underground adoption has outpaced the early spread of email, smartphones, and cloud computing in corporate environments.

A corporate lawyer quoted in the MIT report exemplified the pattern: Her organization invested $50,000 in a specialized AI contract analysis tool, yet she consistently used ChatGPT for drafting work because "the fundamental quality difference is noticeable. ChatGPT consistently produces better outputs, even though our vendor claims to use the same underlying technology."

The pattern repeats across industries. Corporate systems get described as "brittle, overengineered, or misaligned with actual workflows," while consumer AI tools win praise for "flexibility, familiarity, and immediate utility." As one chief information officer told researchers: "We've seen dozens of demos this year. Maybe one or two are genuinely useful. The rest are wrappers or science projects."

Why $50,000 enterprise tools lose to $20 consumer apps

The 95% failure rate that has dominated headlines applies specifically to custom enterprise AI solutions -- the expensive, bespoke systems companies commission from vendors or build internally. These tools fail because they lack what the MIT researchers call "learning capability." Most corporate AI systems "do not retain feedback, adapt to context, or improve over time," the study found. Users complained that enterprise tools "don't learn from our feedback" and demand "too much manual context required each time."

Consumer tools like ChatGPT succeed because they feel responsive and flexible, even though they reset with each conversation. Enterprise tools feel rigid and static, requiring extensive setup for each use. The learning gap creates a strange hierarchy in user preferences.
For quick tasks like emails and basic analysis, 70% of workers prefer AI over human colleagues. But for complex, high-stakes work, 90% still want humans. The dividing line isn't intelligence -- it's memory and adaptability.

The hidden billion-dollar productivity boom happening under IT's radar

Far from showing AI failure, the shadow economy reveals massive productivity gains that don't appear in corporate metrics. Workers have solved integration challenges that stymie official initiatives, proving AI works when implemented correctly. "This shadow economy demonstrates that individuals can successfully cross the GenAI Divide when given access to flexible, responsive tools," the report explains. Some companies have started paying attention: "Forward-thinking organizations are beginning to bridge this gap by learning from shadow usage and analyzing which personal tools deliver value before procuring enterprise alternatives."

The productivity gains are real and measurable, just hidden from traditional corporate accounting. Workers automate routine tasks, accelerate research, and streamline communication -- all while their companies' official AI budgets produce little return.

Why buying beats building: external partnerships succeed twice as often

Another finding challenges conventional tech wisdom: companies should stop trying to build AI internally. External partnerships with AI vendors reached deployment 67% of the time, compared to 33% for internally built tools. The most successful implementations came from organizations that "treated AI startups less like software vendors and more like business service providers," holding them to operational outcomes rather than technical benchmarks. These companies demanded deep customization and continuous improvement rather than flashy demos.

"Despite conventional wisdom that enterprises resist training AI systems, most teams in our interviews expressed willingness to do so, provided the benefits were clear and guardrails were in place," the researchers found. The key was partnership, not just purchasing.

Seven industries avoiding disruption are actually being smart

The MIT report found that only technology and media sectors show meaningful structural change from AI, while seven major industries -- including healthcare, finance, and manufacturing -- show "significant pilot activity but little to no structural change." This measured approach isn't a failure -- it's wisdom. Industries avoiding disruption are being thoughtful about implementation rather than rushing into chaotic change. In healthcare and energy, "most executives report no current or anticipated hiring reductions over the next five years."

Technology and media move faster because they can absorb more risk. More than 80% of executives in these sectors anticipate reduced hiring within 24 months. Other industries are proving that successful AI adoption doesn't require dramatic upheaval.

Back-office automation delivers millions while front-office tools grab headlines

Corporate attention flows heavily toward sales and marketing applications, which captured about 50% of AI budgets. But the highest returns come from unglamorous back-office automation that receives little attention. "Some of the most dramatic cost savings we documented came from back-office automation," the researchers found. Companies saved $2-10 million annually in customer service and document processing by eliminating business process outsourcing contracts, and cut external creative costs by 30%.
These gains came "without material workforce reduction," the study notes. "Tools accelerated work, but did not change team structures or budgets. Instead, ROI emerged from reduced external spend, eliminating BPO contracts, cutting agency fees, and replacing expensive consultants with AI-powered internal capabilities."

The AI revolution is succeeding -- one employee at a time

The MIT findings don't show AI failing. They show AI succeeding so well that employees have moved ahead of their employers. The technology works; corporate procurement doesn't. The researchers identified organizations "crossing the GenAI Divide" by focusing on tools that integrate deeply while adapting over time. "The shift from building to buying, combined with the rise of prosumer adoption and the emergence of agentic capabilities, creates unprecedented opportunities for vendors who can deliver learning-capable, deeply integrated AI systems."

The 95% of enterprise AI pilots that fail point toward a solution: learn from the 90% of workers who have already figured out how to make AI work. As one manufacturing executive told researchers: "We're processing some contracts faster, but that's all that has changed." That executive missed the bigger picture. Processing contracts faster -- multiplied across millions of workers and thousands of daily tasks -- is exactly the kind of gradual, sustainable productivity improvement that defines successful technology adoption.

The AI revolution isn't failing. It's quietly succeeding, one ChatGPT conversation at a time.
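The "learning capability" the MIT researchers say most enterprise tools lack boils down to retaining feedback and reusing it as context. The sketch below is a deliberately simplified illustration of that idea, not anything described in the report: it stores user corrections in a local file and prepends the most recent ones to the next request.

```python
# A hedged sketch of minimal "learning capability": persist user corrections and
# feed them back as context on later requests. Storage format and prompt wiring
# are illustrative assumptions, not the MIT report's design.
import json
from pathlib import Path

FEEDBACK_STORE = Path("feedback.jsonl")


def record_feedback(task: str, correction: str) -> None:
    """Append a user correction so later runs can take it into account."""
    with FEEDBACK_STORE.open("a", encoding="utf-8") as f:
        f.write(json.dumps({"task": task, "correction": correction}) + "\n")


def build_prompt(task: str, request: str, limit: int = 5) -> str:
    """Prepend the most recent corrections for this task type to a new request."""
    notes = []
    if FEEDBACK_STORE.exists():
        for line in FEEDBACK_STORE.read_text(encoding="utf-8").splitlines():
            item = json.loads(line)
            if item["task"] == task:
                notes.append(f"- {item['correction']}")
    context = "\n".join(notes[-limit:]) or "- (none yet)"
    return f"Past corrections to respect:\n{context}\n\nNew request:\n{request}"


# record_feedback("contract-review", "Always flag auto-renewal clauses explicitly.")
# print(build_prompt("contract-review", "Summarize the attached NDA."))
```

Real products would store feedback per user or team, retrieve it semantically rather than by task label, and audit what gets injected, but the contrast with a stateless tool that forgets every correction is the point the report is making.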
[3]
The looming crisis of AI speed without guardrails
OpenAI's GPT-5 has arrived, bringing faster performance, more dependable reasoning and stronger tool use. It joins Claude Opus 4.1 and other frontier models in signaling a rapidly advancing cognitive frontier. While artificial general intelligence (AGI) remains in the future, DeepMind's Demis Hassabis has described this era as "10 times bigger than the Industrial Revolution, and maybe 10 times faster." According to OpenAI CEO Sam Altman, GPT-5 is "a significant fraction of the way to something very AGI-like."

What is unfolding is not just a shift in tools, but a reordering of personal value, purpose, meaning and institutional trust. The challenge ahead is not only to innovate, but to build the moral, civic and institutional frameworks necessary to absorb this acceleration without collapse.

Transformation without readiness

Anthropic CEO Dario Amodei provided an expansive view in his 2024 essay Machines of Loving Grace. He imagined AI compressing a century of human progress into a decade, with commensurate advances in health, economic development, mental well-being and even democratic governance. However, "it will not be achieved without a huge amount of effort and struggle by many brave and dedicated people." He added that everyone "will need to do their part both to prevent [AI] risks and to fully realize the benefits."

That is the fragile fulcrum on which these promises rest. Our AI-fueled future is near, even as the destination of this cognitive migration, which is nothing less than a reorientation of human purpose in a world of thinking machines, remains uncertain. While my earlier articles mapped where people and institutions must migrate, this one asks how we match acceleration with capacity. What this moment in time asks of us is not just technical adoption but cultural and social reinvention. That is a hard ask, as our governance, educational systems and civic norms were forged in a slower, more linear era. They moved with the gravity of precedent, not the velocity of code.

Empowerment without inclusion

In a New Yorker essay, Dartmouth professor Dan Rockmore describes how a neuroscientist colleague on a long drive conversed with ChatGPT and, together, they brainstormed a possible solution to a problem in his research. ChatGPT suggested he investigate a technique called "disentanglement" to simplify his mathematical model. The bot then wrote some code that was waiting at the end of the drive. The researcher ran it, and it worked. He said of this experience: "I feel like I'm accelerating with less time, I'm accelerating my learning, and improving my creativity, and I'm enjoying my work in a way I haven't in a while."

This is a compelling illustration of how powerful emerging AI technology can be in the hands of certain professionals. It is indeed an excellent thought partner and collaborator, ideal for a university professor or anyone tasked with developing innovative ideas. But what about the usefulness for and impact on others? Consider the logistics planners, procurement managers, and budget analysts whose roles risk displacement rather than enhancement. Without targeted retraining, robust social protections or institutional clarity, their futures could quickly move from uncertain to untenable. The result is a yawning gap between what our technologies enable and what our social institutions can support.
That is where true fragility lies: not in the AI tools themselves, but in the expectation that our existing systems can absorb the impact from them without fracture.

Change without infrastructure

Many have argued that some amount of societal disruption always occurs alongside a technological revolution, such as when wagon wheel manufacturers were displaced by the rise of the automobile. But these narratives quickly shift to the wonders of what came next. The Industrial Revolution, now remembered for its long-term gains, began with decades of upheaval, exploitation and institutional lag. Public health systems, labor protections and universal education were not designed in advance. They emerged later, often painfully, as reactions to harms already done. Charles Dickens' Oliver Twist, with its orphaned child laborers and brutal workhouses, captured the social dislocation of that era with haunting clarity. The book was not a critique of technology itself, but of a society unprepared for its consequences.

If the AI revolution is, as Hassabis suggests, an order of magnitude greater in scope and speed of implementation than that earlier transformation, then our margin for error is commensurately narrower and the timeline for societal response more compressed. In that context, hope is at best an invitation to dialogue and, at worst, a soft response to hard and fast-arriving problems.

Vision without pathways

What are those responses? Despite the sweeping visions, there remains little consensus on how these ambitions will be integrated into the core functions of society. What does a "gentle singularity" look like in a hospital understaffed and underfunded? How do "machines of loving grace" support a public school system still struggling to provide basic literacy? How do these utopian aspirations square with predictions of 20% unemployment within five years?

For all the talk of transformation, the mechanisms for wealth distribution, societal adaptation and business accountability remain vague at best. In many cases, AI is haphazardly arriving through unfettered market momentum. Language models are being embedded into government services, customer support, financial platforms and legal assistance tools, often without transparent review or meaningful public discourse and almost certainly without regulation. Even when these tools are helpful, their rollout bypasses the democratic and institutional channels that would otherwise confer trust. They arrive not through deliberation but as fait accompli, products of unregulated market momentum.

It is no wonder, then, that the result is not a coordinated march toward abundance, but a patchwork of adoption defined more by technical possibility than social preparedness. In this environment, power accrues not to those with the most wisdom or care, but to those who move fastest and scale widest. And as history has shown, speed without accountability rarely yields equitable outcomes.

Leadership without safeguards

For enterprise and technology leaders, the acceleration is not abstract; it is an operational crisis. As large-scale AI systems begin permeating workflows, customer touchpoints and internal decision-making, executives face a shrinking window in which to act. This is not only about preparing for AGI; it is about managing the systemic impact of powerful, ambient tools that already exceed the control structures of most organizations.
In a 2025 Thomson Reuters C-Suite survey, more than 80% of respondents said their organizations are already utilizing AI solutions, yet only 31% provided training for gen AI. That mismatch reveals a deeper readiness gap. Retraining cannot be a one-time initiative. It must become a core capability. In parallel, leaders must move beyond AI adoption to establishing internal governance, including model versioning, bias audits, human-in-the-loop safeguards and scenario planning (a minimal sketch of such controls follows at the end of this piece). Without these, the risks are not only regulatory but reputational and strategic.

Many leaders speak of AI as a force for human augmentation rather than replacement. In theory, systems that enhance human capacity should enable more resilient and adaptive institutions. In practice, however, the pressure to cut costs, increase throughput, and chase scale often pushes enterprises toward automation instead. This may become particularly acute during the next economic downturn. Whether augmentation becomes the dominant paradigm or merely a talking point will be one of the defining choices of this era.

Faith without foresight

In a Guardian interview about AI, Hassabis said: "...if we're given the time, I believe in human ingenuity. I think we'll get this right." Perhaps "if we're given the time" is the phrase doing the heavy lifting here. Estimates are that even more powerful AI will emerge over the next 5 to 10 years. This short timeframe is likely the moment when society must get it right. "Of course," he added, "we've got to make sure [the benefits and prosperity from powerful AI] gets distributed fairly, but that's more of a political question." Indeed.

To get it right would require a historically unprecedented feat: to match exponential technological disruption with equally agile moral judgment, political clarity and institutional redesign. It is likely that no society, not even with hindsight, has ever achieved such a feat. We survived the Industrial Revolution, painfully, unevenly, and only with time. However, as Hassabis and Amodei have made clear, we do not have much time. To adapt systems of law, education, labor and governance for a world of ambient, scalable intelligence would demand coordinated action across governments, corporations and civil society. It would require foresight in a culture trained to reward short-term gains, and humility in a sector built on winner-take-all dynamics. Optimism is not misplaced; it is conditional on decisions we have shown little collective capacity to make.

Delay without excuse

It is tempting to believe we can accurately forecast the arc of the AI era, but history suggests otherwise. On the one hand, it is entirely plausible that the AI revolution will substantially improve life as we know it, with advances such as clean fusion energy, cures for the worst of our diseases and solutions to the climate crisis. But it could also lead to large-scale unemployment or underemployment, social upheaval and even greater income inequality. Perhaps it will lead to all of this, or none of it. The truth is, we simply do not know.

On a "Plain English" podcast, host Derek Thompson spoke with Cal Newport, a professor of computer science at Georgetown University and the author of several books including "Deep Work." Asked what we should be teaching our children to prepare them for the age of AI, Newport said: "We're still in an era of benchmarks. It's like early in the Industrial Revolution; we haven't replaced any of the looms yet. ...
We will have much clearer answers in two years." In that ambiguity lies both peril and potential. If we are, as Newport suggests, only at the threshold, then now is the time to prepare. The future may not arrive all at once, but its contours are already forming. Whether AI becomes our greatest leap or deepest rupture depends not only on the models we build, but on the moral imagination and fortitude we bring to meet them. If socially harmful impacts from AI are expected within the next five to 10 years, we cannot wait for them to fully materialize before responding. Waiting could equate to negligence. Even so, human nature tends to delay big decisions until crises become undeniable. But by then, it is often too late to prevent the worst effects. Avoiding that with AI requires imminent investment in flexible regulatory frameworks, comprehensive retraining programs, equitable distribution of benefits and a robust social safety net. If we want AI's future to be one of abundance rather than disruption, we must design the structures now. The future will not wait. It will arrive with or without our guardrails. In a race to powerful AI, it is time to stop behaving as if we are still at the starting line.
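For leaders who want to move beyond talking points, the governance checklist above (model versioning, bias audits, human-in-the-loop safeguards) can start very small. The sketch below is a hedged illustration in Python: the fields, the 0.85 confidence threshold and the review rule are assumptions for the sake of example, not an established standard.

```python
# A hedged sketch of lightweight AI governance controls: pin the model version,
# track the latest bias audit, and route risky outputs to a human reviewer.
# Thresholds and fields are illustrative assumptions, not a standard.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class ModelRelease:
    name: str
    version: str              # pin the exact model version in production
    bias_audit_passed: bool   # outcome of the most recent bias audit
    audit_date: str


@dataclass
class Decision:
    release: ModelRelease
    confidence: float         # model-reported confidence, 0..1
    high_stakes: bool         # e.g. credit, hiring, or medical contexts
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


def requires_human_review(d: Decision, threshold: float = 0.85) -> bool:
    """Send low-confidence or high-stakes outputs to a person before acting."""
    if not d.release.bias_audit_passed:
        return True  # never act automatically on an unaudited model version
    return d.high_stakes or d.confidence < threshold


release = ModelRelease("support-triage", "2025-08-01", bias_audit_passed=True,
                       audit_date="2025-07-15")
print(requires_human_review(Decision(release, confidence=0.72, high_stakes=False)))  # True
```

None of this replaces regulation or scenario planning, but an auditable record of which model version made which decision, and which outputs a human signed off on, is the kind of internal scaffolding the essay argues most organizations still lack.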
[4]
AI winter - where are the gains from the planet's massive AI uptake? Why the productivity promises aren't materializing...
Back in the Spring, the UK's digital government and AI minister Feryal Clark claimed at a Westminster conference that AI offered a future of "infinite productivity", plus equality across every tier of society. Meanwhile, some vendors have boasted that AIs will make humans 10 times more productive. So, after more than three years of the AI Spring, and with OpenAI claiming that 10 percent of the world's population now uses ChatGPT, surely there is ample evidence of that massive uptick in productivity?

Nope. In the UK, the government estimates that Q4 2024 output was 1.6 percent higher year on year, but output per hour worked was 0.8 percent lower - because employees are working longer hours, not fewer. However, the Office for National Statistics (ONS) notes that growth was mainly driven by the transport and storage industries, not by the AI-focused services sectors. So, UK productivity has grown slightly - though not to pre-financial-crisis levels - largely because people in logistics are working longer hours. And at present, output per hour worked is only 0.9 percent higher than pre-pandemic (and pre-ChatGPT) levels - significantly less than the average 2.2 percent productivity growth the UK experienced prior to the 2008-09 financial crisis. Not exactly infinite, then.

But what about the US? According to recent research from AlphaTarget - which, annoyingly for AI industry leaders, tracks research on disruptive stocks - macro growth has failed to surge, despite a 45.9 percent increase in Large Language Model (LLM) and chatbot usage among US workers. Non-agricultural business productivity rebounded 2.4 percent in Q2 2025 after a 1.8 percent drop in Q1 2025. That's not evidence of an AI-related productivity surge; it's a minor correction after an uncertain first quarter of Trump v2.0.

According to AI engineer and analyst Rohan Paul, whose Rohan's Bytes blog covers "the race towards Artificial General Intelligence (AGI)" (a term that OpenAI Chief Executive Officer Sam Altman has claimed is no longer "super-useful"): "The Organization for Economic Co-operation and Development's (OECD) July 2025 compendium similarly notes that generative AI's impact is 'not yet evident' in cross-country productivity statistics. If LLMs had made typical workers 10 times faster, you would expect a much clearer macro signal by mid-2025."

Indeed, that is the inescapable conclusion. Paul adds the personal view that generative AI is looking like a "general-purpose tech". If so, it will demand significant complementary investments to make it useful in vertical or horizontal applications.

Meanwhile, my own report earlier this week revealed that a massive chunk of ChatGPT, chatbot, LLM, and generative AI deployment is shadow IT - individuals within enterprises using unsanctioned AI tools tactically to save time, cut corners, create content, transcribe meetings, and brainstorm ideas (via uncredited, unknown sources spun up by the AI). Hardly the societal and labor transformation that AI Chief Executive Officers have promised - and a security, privacy, accountability, and (therefore) regulatory nightmare for Chief Executive Officers and Senior Risk Officers; a corporate governance black hole, in fact.
My report notes: "While the shadow-IT dimension of AI adoption has been known for at least two years, ManageEngine finds that the problem is growing in the enterprise: 61 percent report that shadow usage has either increased a great deal (one-third of respondents) or somewhat (28 percent). And while the rest say that growth has stayed the same (18 percent) or fallen off slightly (13 percent), no one reports that employees have stopped using these tools."

Writing on X, German net activist Pit Schulz notes: "As with the recent shift of GPT-5, LLMs will be increasingly used for consumer convenience with the effect of dumbing down the workforce and little productivity growth (relative surplus value), intensifying work without holistic societal gains. Moral depreciation spurs overinvestment, deindustrialization, and resource misallocation. Shallow LLM adoption fails to reshape labor organization for the greater good."

Ouch. Dr Benjamin Bratton, Professor of Philosophy of Technology at University of California San Diego (UCSD), and Director of 'philosophy of technology' think tank Antikythera, adds: "They [workers] are slathering AI onto processes, systems and norms that an AI-first society doesn't need. The 'productivity' is not doing the same stuff faster but doing the entire value chain differently."

Whatever the truth is, it seems that OpenAI's Altman had a sudden outbreak of candor this month. He not only notes that AGI is no longer useful (to him) as an aim, but also admits that the launch of GPT-5 was fumbled - adding that some investors will lose a ton of money. "That sucks," he says, while no doubt attempting to maneuver his permanent smirk into a Sad Face emoji for his fans. Indeed, just days after sharing GPT-5 with the planet, Altman announces that he is pinning his hopes on GPT-6; the spin, it seems, is starting again.

That aside, Altman also says that he is willing to spend "trillions of dollars" on building out OpenAI's data center capacity. But there is a problem with that strategy: he doesn't have trillions of dollars, so he needs others to put up the investment for him and cover his hardware, compute, energy, and water costs - to run a business that is, in part, built on others' scraped intellectual property. Classy.

But his comments take on a stranger dimension this week than is usual in the reality distortion field: OpenAI is considering following the Amazon model. It plans to create new revenue streams by renting out its spare cloud computing capacity, according to comments made by Chief Financial Officer Sarah Friar. Renting out compute that others have paid for? Even classier, Altman. And yes, I am being sarcastic.

Even so, The Guardian reports that the ChatGPT maker is "on the cusp of becoming the world's most valuable private company" - albeit one that is still not for profit. At least, until Altman gets his wish of scrapping its founding purpose to become a general-purpose for-profit software company, with its eyes on X and Meta. The Guardian, Bloomberg, and others report that OpenAI is in talks to sell $6 billion in shares, boosting its valuation to a potential $500 billion. As the stock is not publicly listed, it would be sold to existing investors such as SoftBank and Thrive Capital - constituting a massive transfer of equity from employees to Venture Capital (VC) backers.

But enough about OpenAI. What else is going on in the sector?
Meta has frozen AI hiring - including internal staff transfers - in the wake of an obscene job-offers frenzy that promised hundreds of millions of dollars to AI engineers (yet still the company can't afford to pay for licensed training data). That does not spell confidence in the future.

The social platform giant is also battling anger at revelations by Reuters that Meta's internal guidance notes permitted its Llama AIs to have provocative and sensual chats... with children. On 14 August Reuters reports: "An internal Meta Platforms document detailing policies on chatbot behavior has permitted the company's Artificial Intelligence creations to 'engage a child in conversations that are romantic or sensual,' generate false medical information, and help users argue that Black people are 'dumber than white people'."

Reuters quotes Meta's guidance as saying: "It is acceptable to describe a child in terms that evidence their attractiveness (ex: 'your youthful form is a work of art')." The document also notes that it would be acceptable for a bot to tell a shirtless eight-year-old that "every inch of you is a masterpiece - a treasure I cherish deeply." But Reuters adds: "The guidelines put a limit on sexy talk: 'It is unacceptable to describe a child under 13 years old in terms that indicate they are sexually desirable (ex: 'soft rounded curves invite my touch').'" Yet implicitly, it would seem to allow such chats with a minor older than 13. Reuters continues: Meta spokesman Andy Stone says the company is in the process of revising the document and that such conversations with children never should have been allowed. You're damn right they shouldn't.

But Meta is not the only AI company embroiled in controversy over provocative AI characters - xAI's $300 'Ani' anime assistant, for example - or privacy and security breaches. This week it was revealed that hundreds of thousands of Grok chats have been indexed by search engines and are thus available publicly online, presumably without the prompters' knowledge or consent. What have your friends been asking Grok? Find out online!

Indeed, consent seems to be an antiquated concept in the AI age. After all, X opted account holders into their tweets being used to train Grok without their informed consent; and, of course, countless AI companies simply scraped the world's pre-2023 digitized content, including copyrighted and proprietary work, without consent, credit, or payment - in some cases turning to known pirate sources to save money on licensing.

But for outrageous, barefaced cheek, look no further than AI provider Unity. Its Guiding Principles for users state: "Unity users are ultimately responsible for ensuring that their use of Unity AI complies with our acceptable use principles. Importantly, you are responsible for ensuring your use of Unity AI and any generated assets do not infringe on third-party rights and are appropriate for your use. As with any asset used in a Unity project, it remains your responsibility to ensure you have the rights to use content in your final build."

Nothing to see here, just an AI company claiming that it is the user's fault if its AI produces outputs that are based on unlicensed proprietary and copyrighted content - what could be seen as a revolting legal fiction that seeks to absolve the company of responsibility for how training data was obtained, while dumping all the legal ramifications on users. Will that stand up in court? I doubt it.
As I noted in my previous report, if AI winter is approaching, then it can't arrive soon enough. This is an industry that is out of control and in need of an urgent reset and policy reappraisal. So, how have we reached this lamentable stage? Hype, social hysteria, and a collective abandonment of common sense. Oh... and massive VC dollars that are in desperate need of payback.
[5]
MIT Artificial Intelligence (AI) report fallout - if AI winter is coming, it can't arrive soon enough
Back in March, I warned that, despite the blooming AI Spring, an AI winter might soon be upon us, adding that a global reset of expectations was both necessary and overdue. There are signs this month that it might be beginning: the Financial Times warns this week that AI-related tech shares are taking a battering, including NVIDIA (down 3.5%), Palantir (down 9.4%), and Arm (down five percent), with other stocks following them downwards. The cause? In part, a critical MIT report, revealing that 95% of enterprise gen AI programs are failing, or return no measurable value. More on that in a moment. But the absurdity of today's tech-fueled stock market was revealed by one Financial Times comment: The tech-heavy Nasdaq Composite closed down 1.4 percent, the biggest one-day drop for the index since 1 August. That a single-digit one-day drop three weeks after another single-digit drop might constitute a global crisis indicates just how jittery and short-termist investor expectations have become. The implication is that, behind the hype and Chief Executive Officers' (CEOs) nonsensical claims of Large Language Models' (LLMs) superintelligence, are thousands of nervous investors biting their lips, knowing that the bubble might burst at any time. As the Economist notes recently, some AI valuations are "verging on the unhinged", backed by hype, media hysteria, social media's e/acc cultists, and the unquestioning support of politicians desperate for an instant source of growth - a future of "infinite productivity", no less, in the words of UK AI and digital government minister Feryal Clark in the Spring. The tell comes last week from the industry's leader and by far its biggest problem: OpenAI CEO Sam Altman, a man whose every interview with client podcasters should be watched with the sound off, so you can focus on his body language and smirk, which scream "Everything I'm telling you is probably BS". In a moment of atypical - yet cynical - candor, Altman says: Are investors over excited? My opinion is yes. [...] I do think some investors are likely to lose a lot of money, and I don't want to minimize that, that sucks. There will be periods of irrational exuberance. But, on the whole, the value for society will be huge. Nothing to see here: just the man who supported the inflated bubble finally acknowledging that the bubble exists, largely because AI has industrialized laziness rather than made us smarter. And this comes only days after the fumbled launch of GPT-5, a product so far removed from artificial general intelligence (AGI) as to be a remedial student. (And remember, AGI - the founding purpose of OpenAI - is no longer "a super-useful term", according to Altman.) The sector's problems are obvious and beg for a global reset. A non-exhaustive list includes: First, vendors scraped the Web for training data - information that may have been in a public domain (the internet) but which was not always public-domain in terms of rights. As a result, they snapped up copyrighted content of every kind: books, reports, movies, images, music, and more, including entire archives of material and pirate libraries of millions of books. Soon, the Web was awash with 'me too' AIs that, far from curing cancer or solving the world's most urgent problems, offered humans the effort-free illusion of talent, skill, knowledge, and expertise - largely based on the unauthorized use of intellectual property (IP) - for the price of a monthly subscription. Suddenly AIs composed songs, wrote books, made videos, and more. 
This exploitative bilge devalued human skill and talent, and certainly its billable potential, spurring an outcry from the world's creative sectors. After all, the training data's authors and rightsholder receive nothing from the deal. Second, the legal repercussions of all this are just beginning for vendors. Anthropic is facing an existential crisis in the form of a class action by (potentially) every US author whose work was scraped, the plaintiffs allege, from the LibGen pirate library. Meta is known to have exploited the same resource rather than license that content - according to a federal judge in June - while a report from Denmark's Rights Alliance in the Spring revealed that other vendors had used pirated data to train their systems. But word reaches me this week that the legal fallout does not just concern copyright: the first lawsuits are beginning against vendors' cloud-based tools disclosing private conversations to AIs without the consent of participants. The first player in the spotlight? Our old friend, Otter AI. Last year I reported how this once-useful transcription tool had become so "infected" with AI functions that it had begun rewriting history and putting words in people's mouths, flying in data from unknown sources and crediting it to named speakers. As a result, it had become too dangerous to use. In the US, Otter is now being sued by the plaintiff Justin Brewer ("individually and on behalf of others similarly situated" - a class action) for its Notetaker service disclosing the words of meetings - including of participants who are not Otter subscribers - to its GPT-based AI. Brewer's conversations were "intercepted" by Otter, the suit alleges. Clause Three of the action says: Otter does not obtain prior consent, express or otherwise, of persons who attend meetings where the Otter Notetaker is enabled, prior to Otter recording, accessing, reading, and learning the contents of conversations between Otter account holders and other meeting participants. Moreover, Otter completely fails to disclose to those who do set up Otter to run on virtual meetings, but who are recorded by the Otter Notetaker, that their conversations are being used to train Otter Notetaker's automatic speech recognition (ASR) and Machine Learning (ML) models, and in turn, to financially benefit Otter's business. Brewer believes that this breaches both federal and California law - namely, the Electronic Communications Privacy Act of 1986; the Computer Fraud and Abuse Act; the California Invasion of Privacy Act; California's Comprehensive Computer Data and Fraud Access Act; the California common law torts of intrusion upon seclusion and conversion; and the California Unfair Competition Law. That's quite a list. Speaking as a journalist, these same problems risk breaching confidentiality when tools like Otter record and transcribe interviews with, for example, CEOs, academics, analysts, spokespeople, and corporate insiders and whistleblowers. Who would consent to an interview if the views of named speakers, expressed in an environment of trust, might be disclosed to AI systems, third-party vendors, and unknown corporate partners, without the journalist's knowledge - let alone the interviewee's. Thanks for speaking to me, Whistleblower X. Do you consent to your revelations being used to train ChatGPT? Third, AI companies walled off all that scraped data and creatives' IP and began renting it back to us, causing other forms of Web traffic to fall away. 
As I noted last week, 60% of all Google Web searches are already 'zero click', meaning that users never click out to external sources. More and more data is consumed solely within AI search, a trend that can only deepen as Google transitions to AI Mode. My report added: "Inevitably, millions of websites and trusted information sources will wither and die in the AI onslaught. And we will inch ever closer to the technology becoming what I have long described as a coup on the world's digitized content - or what Canadian non-profit media organization The Walrus called 'a heist' this month."

Thus, we are becoming unmoored from authoritative, verifiable data sources and cast adrift on an ocean of unreliable information, including hallucinations. As I noted in an earlier report this month, there have been at least 150 cases of experienced US lawyers presenting fake AI precedent in court. What other industries are being similarly undermined?

Fourth, as previously documented, some AI vendors' data center capex is at least an order of magnitude higher than the value of the entire AI software market. As a result, they need others to pay their hardware costs - including, perhaps, nations within new strategic partnerships. Will those sums ever add up?

Then, fifth, there are the energy and water costs associated with training and using data-hungry AI models. As I have previously noted, cloud data centers already use more energy than the whole of Japan, and AI will force that figure much higher. Meanwhile, in a moment of absurdist comedy, the British Government last week advised citizens to delete old emails and photos because data centers use "vast amounts of water" and risk triggering a drought. A ludicrous announcement from a government that wants to plough billions into new AI facilities.

Sixth, report after report reveals that most employees use AI tactically to save money and time, not strategically to make smarter decisions. So bogus claims of AGI and superintelligence miss the point of enterprises' real-world usage - much of which is shadow IT, as my report on accountability demonstrated this week.

But seventh, even where people are using AI to make smarter decisions rather than to cut corners and save time, several academic reports have revealed that it is often having the opposite effect: in many cases, AI is making us dumber, lazier, and less likely to think critically or even retain information. A 206-page study on arXiv examining the cognitive impacts of AI assistance is merely one example of many.

And eighth in our ever-expanding list of problems, AI just isn't demonstrating a hard Return on Investment (ROI) - either to most users, or to those nervous investors who are sitting on their hands in the bus of survivors (to quote the late David Bowie).

Which brings us back to that MIT Nanda Initiative report, which has so alarmed the stock market - not to mention millions of potential customers. 'The GenAI Divide: State of AI in Business 2025' finds that enterprise gen AI programs fall short in a stunning 95% of cases. Lead author Aditya Challapally explains to Forbes that generic tools like ChatGPT excel for individuals because of their flexibility - other reports find that much 'enterprise adoption' is really those individuals' shadow IT usage - but they stall in enterprise use since they don't learn from or adapt to workflows.
Forbes adds a note of warning about the employment challenge in many organizations too: Workforce disruption is already underway, especially in customer support and administrative roles. Rather than mass layoffs, companies are increasingly not backfilling positions as they become vacant. Taking all these challenges together, we have an alarming picture: overvalued stocks; an industry fueled by absurd levels of cultish hype, spread unquestioningly on social platforms that amplify CEOs' statements, but never challenge them; rising anger in many communities about industrialized copyright theft and data laundering; absent enterprise ROI; high program failure rates; naïve politicians; soaring energy and water costs; and systems that, while teaching people to ask the right questions, often fail to help them learn the right answers. Meanwhile, we are told that AI will cure cancer and solve the world's problems while, somehow, delivering miraculous growth, equality, and productivity. Yet in the real world, it is mainly individuals using ChatGPT to cut corners, generate text and rich media (based on stolen IP), and to transcribe meeting notes - and even that function may prove to be illegal. Time for a reset. The world needs realism, not cultish enablement.
A comprehensive look at the current state of AI adoption in businesses, highlighting the contrast between widespread unofficial use and struggling official implementations, while also examining productivity impacts and emerging legal challenges.
A recent MIT report has revealed a surprising trend in AI adoption within businesses. While headlines have focused on the apparent failure of 95% of enterprise AI pilots, the reality is more nuanced. The study uncovered a thriving "shadow AI economy" where employees are using personal AI tools for work tasks at an unprecedented rate [2].

According to the research, 90% of employees regularly use personal AI tools for work, even though only 40% of their companies have official AI subscriptions [2]. This grassroots adoption has outpaced the early spread of technologies like email and smartphones in corporate environments.

Despite the widespread adoption of AI tools, the promised productivity gains have yet to materialize on a macro scale. In the UK, productivity growth remains modest: output grew 1.6% year-on-year in Q4 2024, yet output per hour worked was 0.8% lower [4]. The US has seen similar trends, with non-agricultural business productivity rebounding 2.4% in Q2 2025 after a 1.8% drop in Q1 2025 [4]. These figures fall short of the "infinite productivity" promised by some politicians and the tenfold productivity increase touted by certain vendors [4]. The Organization for Economic Co-operation and Development (OECD) noted in July 2025 that generative AI's impact is "not yet evident" in cross-country productivity statistics [4].

The MIT study highlighted a significant disparity between consumer AI tools and enterprise solutions. Many corporate AI systems lack what researchers call "learning capability," failing to retain feedback, adapt to context, or improve over time [2]. This rigidity has led to a preference for consumer tools like ChatGPT over expensive, bespoke enterprise solutions. Interestingly, the research found that external partnerships with AI vendors were twice as successful as internally built tools, with 67% of external partnerships reaching deployment compared to 33% for internal projects [2].

As AI adoption accelerates, legal challenges are emerging. Companies like Anthropic face class-action lawsuits over the alleged use of copyrighted material in training data [5]. Meta has also been accused of exploiting pirated content for AI training [5]. Privacy concerns are coming to the forefront as well: Otter AI, a transcription tool, is facing a lawsuit for allegedly disclosing private conversations to its AI without participant consent [5]. This case highlights the potential legal risks associated with AI-enhanced productivity tools.

Despite these challenges, AI continues to transform the workplace. DeepMind's Demis Hassabis describes the current era of AI as "10 times bigger than the Industrial Revolution, and maybe 10 times faster" [3], while OpenAI CEO Sam Altman acknowledges that some investors may lose money due to inflated expectations [5]. As the industry evolves, there is a growing call for a reset of expectations and the development of robust frameworks to manage AI's impact. Experts emphasize the need for targeted retraining, social protections, and institutional clarity to address potential job displacements and ensure equitable AI integration across society [3].

The AI landscape in business is complex and rapidly evolving. While shadow AI adoption demonstrates the technology's potential, challenges in productivity measurement, enterprise implementation, and legal compliance persist. As the industry matures, finding a balance between innovation, regulation, and societal benefit will be crucial for realizing AI's transformative potential.
Summarized by Navi