2 Sources
[1]
'Companies that are not set up to quickly adopt AI workers will be at a huge disadvantage': OpenAI's Sam Altman warns firms not to fall behind on AI - but notes 'it's going to take a lot of work and some risk'
Altman also looks at possible headwinds affecting AI adoption

OpenAI CEO Sam Altman has laid out his vision for the future of how humans and AI will work together, but warned of potentially severe effects for businesses that have fallen behind.

Speaking at the Cisco AI Summit 2026, Altman outlined how AI tools and humanity can collaborate on a wide range of tasks, boosting productivity and efficiency across industries.

"The capability of AI feels to me the biggest it's ever been," Altman added. "We are planning for a world where demand will grow at an accelerated pace each year...Companies that are not set up to quickly adopt AI workers will be at a huge disadvantage. And it's going to take a lot of work and some risk."

In a wide-ranging fireside chat with Cisco president Jeetu Patel, Altman also discussed the possible headwinds which could affect the AI industry going forward. After a significant pause, he answered that "some sort of global destabilisation, mega supply chain disruption" was the biggest concern.

Asked about the current problem child of the AI world, Moltbook, Altman noted he could envisage a world where AI agents interact with each other and lead to new types of interactions. This includes interactions via OpenAI's recently announced Codex app, which promises huge steps forward in coding ability and capabilities.

Altman said that with Codex he had "felt another ChatGPT moment", with a "clear view of the future of knowledge work and how enterprises and individual people are going to work in a completely different way".

"Giving an AI agent full access to your computer and your web browser with all your sessions leads to incredible stuff - and that seems here to stay," he added. "OpenAI did an incredible job of bringing many ideas together to make that feel useable and real. That seems certain to be part of our future."

But perhaps unsurprisingly, Altman was in a positive mood about AI adoption, noting that many observers are underestimating how much language models will improve.

"The models are going to get so much better quickly," he said, predicting a "subjective 10x improvement" in 2026. "We've been trying to figure out how we can communicate about what we think is happening."
[2]
Sam Altman Says Full AI Companies Are Possible, but Businesses Are Not Ready | PYMNTS.com
"I can imagine billions of humanoid robots building more data centers and mining for material and building more power plants," the OpenAI chief executive said on Tuesday (Feb. 3) during a conversation with Cisco's CPO Jeetu Patel at Cisco's AI Summit. "I can imagine just the economy growing at an unprecedented rate if there's all sorts of incredible new services and scientific discoveries happening."

But the image anchored a broader theme that surfaced repeatedly throughout his remarks: AI's trajectory is no longer confined to narrow use cases or productivity gains. It is moving toward systemic change, even as most enterprises remain structurally unprepared to absorb it.

When the conversation turned to where today's AI systems ultimately lead, Altman focused less on near-term deployment and more on capability. "The upper limit, I think, is full AI companies," he said, describing organizations where AI systems are not tools layered onto workflows, but active participants in how work gets done.

A key inflection point, he added, is the shift from models that generate outputs to agents that can operate computers directly. "Code is really powerful," Altman said, "but code plus generalized computer use is even much more powerful." Agents that can navigate browsers, applications and authenticated environments allow AI to complete tasks end to end, rather than stopping at recommendations or drafts. Once that interaction model is experienced, Altman suggested, it becomes difficult to think of AI as a passive system waiting for human prompts.

He extended that logic beyond individual workflows to coordination and collaboration. Altman described the possibility of entirely new interaction models in which agents communicate with one another on behalf of humans. He positioned it as a natural outcome of increasing capability: interaction systems designed primarily for machines to exchange information and coordinate tasks, rather than for humans to manually manage those exchanges themselves.

Despite rapid progress in AI capability, Altman said that the most binding constraints are no longer technical. "How are we going to balance the sort of security and data access versus the utility of all of these models?" he asked. Existing security and permission systems were designed for human users making discrete, intentional requests. They are poorly suited to always-on agents that observe continuously and act across systems. "It feels to me like there is a new kind of security or data access paradigm that needs to be invented for this," Altman said. Until that happens, he said, organizations will continue to limit AI deployment even as capabilities advance.

Altman repeatedly returned to what he described as a widening gap between what AI systems can do and what enterprises are prepared to adopt. That gap, in his view, is driven less by the technology itself than by unresolved questions around governance, security and data access. The result is slower adoption, even for tools that already exist. "Figuring out how to set up enterprises such that they can quickly absorb these new tools," Altman said, without years lost to internal friction and access debates, "feels very important." Delays, he warned, could carry competitive consequences.
Companies that fail to adapt their structures fast enough may find themselves falling behind, not because the technology is unavailable, but because they are not ready to work alongside it. "I don't want to make it too dramatic of a prediction," Altman said, "but I think the companies that are not set up to be able to adopt, let's call them AI co-workers, very quickly, will be at a huge disadvantage."
OpenAI CEO Sam Altman told the Cisco AI Summit 2026 that companies not set up to quickly adopt AI workers will be at a huge disadvantage. He envisions full AI companies where AI agents actively participate in work rather than serve as passive tools. But Altman warns enterprises are largely unprepared for this shift, facing unresolved challenges around security, governance and data access.
Speaking at the Cisco AI Summit 2026, OpenAI CEO Sam Altman delivered a clear message about the competitive stakes of AI adoption: companies that fail to quickly adopt AI workers will find themselves at a huge disadvantage [1]. In a fireside chat with Cisco president Jeetu Patel, Altman outlined his vision for how AI and humans will collaborate across industries, but cautioned that the transition would require significant work and risk [1]. The OpenAI leader emphasized that demand for AI capabilities will grow at an accelerated pace each year, making organizational readiness critical for survival in an increasingly AI-driven economy [1].
Source: TechRadar
Altman described a future that extends far beyond incremental productivity gains, predicting the emergence of full AI companies where AI systems become active participants rather than passive tools [2]. He painted an ambitious picture of billions of humanoid robots building data centers, mining materials, and constructing power plants, driving economic growth at unprecedented rates through scientific discoveries and new services [2]. The future of AI agents represents a fundamental shift from models that generate outputs to systems that can operate computers directly, navigating browsers, applications, and authenticated environments to complete tasks end to end [2]. Altman noted that once organizations experience AI agents with full access to computers and web browsers, it becomes difficult to view AI as merely a system waiting for human prompts [1].
Source: PYMNTS
Discussing OpenAI's recently announced Codex app, Altman said he experienced another ChatGPT moment, seeing a clear view of how knowledge work will transform completely [1]. The application promises significant advances in coding ability and capabilities, demonstrating how AI agents can interact with each other to create entirely new types of interactions [1]. Altman emphasized that code combined with generalized computer use proves even more powerful than code alone, enabling AI to move beyond recommendations to actual execution [2]. He extended this logic to envision interaction systems designed primarily for machines to exchange information and coordinate tasks on behalf of humans, rather than requiring manual management of those exchanges [2].
Despite the rapid improvement of language models, Altman identified a widening gap between what AI systems can do and how prepared organizations are to integrate AI co-workers into their operations [2]. The most binding constraints are no longer technical but organizational, centered on unresolved questions around governance, security, and data access [2]. Existing security and permission systems were designed for human users making discrete requests, making them poorly suited for always-on AI agents that observe continuously and act across systems [2]. Altman stressed the need for a new security or data access paradigm, noting that until these challenges are resolved, organizations will continue limiting AI deployment even as capabilities advance [2].

Altman maintained an optimistic outlook on AI capabilities, suggesting many observers underestimate how quickly language models will improve [1]. He predicted a subjective 10x improvement in models during 2026, emphasizing that the capability of AI feels bigger than ever before [1]. However, the consequences could prove severe for businesses that fail to adapt their structures fast enough [2]. Companies may fall behind not because the technology is unavailable, but because they are not ready to work alongside it, with years potentially lost to internal friction and access debates [2]. When asked about potential headwinds affecting the AI industry, Altman cited global destabilization and mega supply chain disruption as his biggest concerns [1]. The message remains clear: figuring out how to set up enterprises to quickly absorb new AI tools represents a critical priority, as delays could carry significant competitive consequences in an economy experiencing systemic change driven by AI [2].

Summarized by Navi