3 Sources
[1]
The AI era has a message for every CEO: Adapt or die | Fortune
In a blunt companywide memo last year, Micha Kaufman, the CEO of freelance marketplace Fiverr, had some harsh truths to share with his employees. "AI is coming for your jobs. Heck, it's coming for my job, too. This is a wake-up call," he warned. A year on, he has a message for the C-suite trying to ride out the AI tsunami. "Don't be a cheerleader. If you're not practicing, don't preach," Kaufman tells Fortune. "You can't make AI a value on the wall and then not behave by it." CEOs are currently treating AI as a training problem, he says -- buying products, running a seminar, and checking a box -- when the real challenge is a cultural one that starts at the top.

Across industries, there's palpable angst about the impending AI onslaught and how best to prepare workers, managers, and -- above all -- themselves for the new reality that lies ahead. The technology is moving faster than any organizational playbook can keep up with, and the executives tasked with leading the transition are often figuring it out in real time. What's more, many are seeing a gap between their companies' AI ambitions and the results. There are lots of pilots and hype -- but only a small number of organizations, usually in tech, are seeing transformative gains. "There are many companies that are struggling with some kind of dissonance between the promise of AI and the reality of what they hoped it would be," says Kate Smaje, senior partner and AI lead at McKinsey. "There are firms all over the map."

It's a disconnect that's keeping CEOs awake at night. A recent survey by the Harris Poll found that 79% of U.S. CEOs believe they could lose their jobs within two years if they fail to deliver measurable business gains from AI. Part of this is investor pressure over ROI, and part of it is FOMO: Some sectors, like software engineering, have seen massive productivity gains from AI, while others are still grappling with how to implement basic tools. For business leaders trying to prepare, it's a daunting moment. But with so much at stake, it's not one they can ignore.

One response to this anxiety has been to shift from an era of experimentation -- in which employees are encouraged to try out AI -- to one of top-down mandates and formal pilots, where employees are required to trial specific tools and demonstrate measurable results. Companies including Meta, Amazon, Salesforce, and Microsoft are cracking down to impose AI adoption within their workforce, mandating, monitoring, and evaluating the use of AI tools. At Meta, new performance review systems can reportedly track how many lines of code an engineer wrote with AI assistance, while Amazon managers have dashboards monitoring individual AI-tool usage that factors into promotion decisions, according to media reports.

But tech companies have a history of driving workforce trends before the rest of the business world. And of course, tech companies have their own skin in the game, since they are the ones making -- and selling -- the various AI products. Outside of tech, CEOs are operating with a lot less clarity. As Wharton management professor Peter Cappelli notes, too many executives are still "listening to the people who built the tools" instead of asking whether those same approaches make sense in their own businesses. The builders, he argues, "are not experts in business or in management" -- and yet their success stories are being treated as a universal blueprint.
Instead of mandates, some companies are betting on peer-led learning and positive incentives to drive adoption of AI tools. "I think if you take a stick approach right now, you might actually get people basically achieving the right short-term goal but failing the long-term objective, which is building an organization that is much more nimble and resilient," says Greg Hart, CEO of online learning platform Coursera.

For companies, the stakes of successfully adapting to AI go beyond immediate productivity metrics. And because many employees view AI as a threat to their livelihoods, mandates tend to deepen that anxiety rather than dissolve it. Roughly 55,000 jobs were cut in layoffs that companies attributed directly to AI in 2025, more than three times the total in the preceding two years, according to recruiting firm Challenger, Gray & Christmas. Employee fears were hardly relieved when enterprise software company Atlassian cut 10% of its staff in March, and fintech firm Block slashed 40%. Block CEO Jack Dorsey said that AI tools, paired with "smaller and flatter teams," are fundamentally changing the nature of work and "what it means to build and run a company." Some employees also worry that by using AI at work they're essentially training the automaton that will replace them.

Fiverr's Kaufman argues that this is exactly why leaders need to disentangle fear around AI from AI skills. Companies often "collapse" the anxiety conversation around job displacement and the upskilling conversation, making both worse in the process, he says. Fears about displacement are "legitimate" and deserve a direct, honest discussion, not "corporate reassurance theater," Kaufman notes. Only once that's on the table can leaders talk credibly about how roles will change, which categories of work will shrink or grow, and which new skills people actually need to develop.

Joseph B. Fuller, a professor of management practice at Harvard Business School, says companies "just have to get comfortable" with spending more now to learn, and resisting the pressure to make premature moves they will later regret. What's called for is a CEO who thinks more like a scientist than a general -- someone comfortable not just overseeing the experiments, but protecting the people running them from being penalized when things don't go to plan. A successful CEO's job is to create the conditions for risk-free experimentation by making sure "the people who are conducting the experiments understand that senior colleagues, up to [and] including the board, realize that what they're doing is a trial," Fuller says. Instead of quietly shelving AI pilot projects that fail to deliver results, Fuller recommends celebrating well-run failures and sharing the knowledge.

Coursera's Hart stresses the importance of using this early phase of the AI era to learn and to adjust. "If you focus only on efficiency right now -- given that AI is still in its very early days for what it's going to be able to accomplish -- you're losing an opportunity to think about the really transformative effect that AI can have for your company," he says. Coursera runs monthly "AI spark sessions" where employees volunteer how they are using AI to make their jobs easier and more effective. These sessions are among the most well-attended companywide, Hart says, with staff openly sharing tools, workflows, and follow-up resources instead of hiding efficiencies they've discovered.
That's especially important for AI projects, where returns on investment are not always immediate. Economists call it the J-curve: Productivity dips before it soars, as companies absorb the costs of learning before reaping the gains. When a now-infamous MIT report last year found a majority of AI pilots weren't delivering meaningful returns, investors panicked, treating it as an indictment of AI technology. In fact, the report found that the biggest cause of poor outcomes wasn't the technology itself but a widespread "learning gap," with large organizations lacking the expertise to embed AI meaningfully into their workflows. Startups, unburdened by entrenched processes and office politics, were found to fare considerably better.

It's helpful to keep in mind that executives have been here before, and there are valuable lessons from the past. The last time a technology promised to remake business -- when the internet emerged in the 1990s -- most companies bolted it on and hoped for the best. In those early dotcom days, businesses tended to treat the web like a digital brochure rack -- a shinier distribution channel rather than a reason to rethink how they worked. Only when a minority of firms started rebuilding their businesses around the web did the ground really shift under everyone else. What separated the winners from the laggards wasn't access to the technology; it was whether leaders were willing to challenge habits, redesign jobs, and tolerate a messy period of experimentation.

In that sense, AI might not be so different. "If you're just bringing AI in, we're already seeing evidence that it won't deliver what you hope," says Aneesh Raman, chief economic opportunity officer at LinkedIn. "Even skilling people on 'how to use AI' only gets you part of the way there. The real impact comes when workers use AI in service of changing their jobs -- redesigning tasks and workflows, not just adding another tool."
[2]
I run the world's largest employee mental health company. Leaders are treating AI adoption as a tech problem. It's not | Fortune
As CEO of ComPsych -- the world's largest provider of employee mental health services -- I spend a lot of time thinking about what's making workers anxious. Right now, AI is at the top of that list. According to a 2025 Pew Research Center report, workers are more worried than hopeful about AI in the workplace. And the consequences of that anxiety go well beyond morale: research shows that when employees believe they are likely to lose their job, the risk of serious psychological distress rises considerably -- along with the likelihood of mental health leave, disengagement, and burnout.

Leaders are rolling out AI tools at speed. Most are treating this as a technology challenge. It isn't. It's a people challenge -- and getting it wrong has a measurable cost.

Job insecurity driven by AI fear manifests in two damaging patterns. The first is disengagement -- distracted, uncollaborative workers doing the minimum to get by. The second is its opposite: workers who over-compensate by becoming hypersensitive and over-extended, eventually burning out. Both patterns hurt team performance. Both are symptoms of the same underlying failure: leaders who haven't addressed what their employees are actually afraid of.

To counteract this, leaders need to focus on building genuine trust. Trust doesn't just happen -- it's earned. The most effective way to build it is through early, open, and consistent communication. The impulse to wait until everything is figured out before communicating is understandable. But in the absence of communication, rumors form, narratives harden, and fear sets in. Sharing relevant information as soon as it's available -- being transparent about what's still being determined and when more details will follow -- allows leaders to inform and reassure simultaneously. In the case of AI, this can be as simple as defining a clear company philosophy. Articulating that AI may change parts of a person's job but does not diminish their value gives leaders the opportunity to set the tone, reduce fear, and support emotional well-being before anxiety takes hold.

Clarity about where AI should -- and shouldn't -- have the final word is one of the most important things a leader can establish early. I remind our teams constantly: AI has to earn our trust. If you were collaborating with a new colleague for the first time -- one with only a few years of real-world experience -- you would never take their work, assume it was fully correct, and move forward without carefully reviewing it, providing feedback, or challenging the portions you disagreed with. The same scrutiny applies to anything produced with AI assistance.

Beyond quality checks, there are categories of work where human judgment, ingenuity, and creativity must remain the final word. Every organization will need to define its own guardrails. As a company providing mental health services to millions of people worldwide, we've been explicit with our teams: the human expertise of our clinicians is paramount. We are embracing AI to enhance operations and aid patient navigation -- but we will never defer to a large language model when someone is in crisis and needs human-centered care.

Setting clear expectations is necessary but not sufficient -- organizations must also actively ensure their employees are growing alongside the technology, not being hollowed out by it. Research from MIT's Media Lab found that heavy reliance on AI tools can atrophy the independent thinking and problem-solving skills people need most.
It is incumbent on leaders to ensure teams are using these tools to vault their creative and strategic thinking to new heights -- not as crutches they eventually won't be able to function without. This means actively encouraging imaginative, out-of-the-box thinking, reinforcing the value of individual skills, and investing in continuous learning and development programs. Upskilling is not optional: the skills needed to thrive alongside AI, and the tasks that make up our workdays, will change enormously. The leaders who get this right won't just be the ones who deploy AI fastest. They'll be the ones who brought their people along -- reducing fear, building trust, and preserving the human judgment that no model can replicate. The goal isn't to help your workforce adapt to the future of work. It's to help them build it.
[3]
AI Is Making Leaders Question Their Worth. Here's the Psychological Shift They Must Make.
Artificial intelligence is not just changing workflows. It is destabilizing our identity. Most discussions around AI focus on efficiency, automation and competitive advantage. But working with founders and executives, I am seeing something quieter and more personal happening beneath the surface. AI is confronting leaders with a question many have never had to answer: Who am I if I am not the differentiator? We live in a society where our career consciously or subconsciously becomes our identity -- who do you become when there is technology at everyone's fingertips that can be you?

For years, high performers have regulated their confidence through competence. They built authority by being the most creative, data-driven and strategic thinker in the room. Now, large language models can draft strategy outlines in seconds. Data can be analyzed instantly. Creative assets can be generated on demand. This is not just a technological shift. It is an identity disruption, and that feels scary.

From my perspective as an executive psychology coach, the brain is designed to reduce uncertainty through prediction. The mind is a prediction-making machine. It constantly uses past experience and our sensory present to anticipate what will happen next. When prediction becomes difficult, the nervous system increases vigilance. This doesn't necessarily show up as panic. It can feel like what I call "background noise" and show up behaviorally as vigilance.

You might see it in leaders who suddenly feel the need to be in every AI-related conversation, even when it's not necessary. In executives who are checking competitors' AI announcements late at night, not because it's strategic, but because it's hard to turn off the scanning. In founders who insist on personally reviewing every AI-generated output, tightening control instead of expanding trust. It can also show up as pushing teams to "move faster" without a clear integration plan or dismissing AI publicly while privately feeling unsettled. Another pattern I often see in seasoned leaders is quiet comparison to younger, more tech-native talent, and wondering if they're falling behind.

When identity has been unconsciously tied to exceptional performance, disruption feels like exposure. This is not about ego. It is about conditioning. Many high achievers developed early patterns that linked belonging and safety to output. Achievement became more than success. It became a way to regulate how they feel about themselves. For instance: If I perform, I am valued. And if I am valuable, I am safe. Safe in what way? Safe financially, safe in relationships and safe in identity. When a system can now perform parts of your role faster than you can, that equation destabilizes.

Leaders who are unaware of this dynamic will compensate. They may overwork. Over-assert. Over-correct. They may dismiss AI publicly while privately feeling threatened. They may double down on productivity instead of expanding adaptability. This is where executive psychology matters. Performance is not only cognitive. It is physiological. A chronically activated nervous system narrows perception. It reduces cognitive flexibility. It increases black-and-white thinking.
It subtly shifts leaders into defensive strategy rather than creative expansion. In volatile markets, defensive leadership becomes expensive.

The competitive edge in the AI era will not belong to the leader who can outproduce a machine. It will belong to the leader whose identity is not destabilized by one. That requires internal differentiation. The leaders I see thriving right now are neither the loudest adopters nor the fiercest resistors. They are the most internally stable. They can integrate new tools without unconsciously fighting for relevance. That stability comes from understanding your own patterns. If achievement has historically been your regulator, AI will amplify that pattern. If control has been your safety strategy, unpredictability will stress it.

A word of hope: Disruption also offers opportunity. When performance is no longer the sole differentiator, leadership shifts toward integration, discernment and relational intelligence. Toward the capacity to hold uncertainty without collapsing into urgency. In other words, the edge becomes psychological. This is where maturity in leadership begins to separate itself from mere competence. When you are no longer competing on output alone, you are forced to compete on presence. On judgment. On the ability to regulate yourself in environments that do not offer clear answers.

Artificial intelligence may transform industries, but the leaders who outperform will be the ones who transform their internal operating system first. The ones who can notice their own patterns before those patterns start driving decisions. The ones who can stay steady when comparison rises. The ones who can tolerate not being the smartest voice in the room without interpreting it as a threat.

There has never been a more critical time to do the inner work -- mentally, emotionally and psychologically -- than right now. This has to be non-negotiable; it can't be a "when I get to it" or something you treat as a luxury. It is now your leadership responsibility. Because in a world that is accelerating, steadiness becomes power.
A new survey reveals 79% of U.S. CEOs believe they could lose their jobs within two years if they fail to deliver measurable business gains from AI. The crisis extends beyond the C-suite, with roughly 55,000 jobs cut in AI-related layoffs in 2025 alone—more than three times the total in the preceding two years. Leaders are discovering that AI adoption isn't a technology problem but a cultural and psychological challenge that demands a fundamental shift in how they lead.

The pressure on corporate leadership has reached a critical threshold. A recent Harris Poll survey found that 79% of U.S. CEOs believe they could lose their jobs within two years if they fail to deliver measurable business gains from AI [1]. This anxiety reflects both investor pressure over ROI and competitive FOMO, as some sectors like software engineering have seen massive productivity gains from AI while others struggle with basic implementation.

Micha Kaufman, CEO of freelance marketplace Fiverr, delivered a blunt warning to his employees last year: "AI is coming for your jobs. Heck, it's coming for my job, too." Now he has a message for the C-suite trying to navigate the AI tsunami: "Don't be a cheerleader. If you're not practicing, don't preach" [1]. CEOs are treating AI as a training problem—buying products, running seminars, and checking boxes—when the real challenge is a cultural one that starts at the top.

Many companies are struggling with dissonance between the promise of AI and the reality of what they hoped it would be, according to Kate Smaje, senior partner and AI lead at McKinsey [1]. The technology is moving faster than any organizational playbook can keep up with, and executives tasked with leading the transition are often figuring it out in real time. There are lots of pilots and hype, but only a small number of organizations, usually in tech, are seeing transformative gains.

Companies including Meta, Amazon, Salesforce, and Microsoft are cracking down to impose AI adoption within their workforce, mandating, monitoring, and evaluating the use of AI tools. At Meta, new performance review systems can reportedly track how many lines of code an engineer wrote with AI assistance, while Amazon managers have dashboards monitoring individual AI-tool usage that factors into promotion decisions [1].

The human cost of this transition is substantial. Roughly 55,000 jobs were cut in layoffs that companies attributed directly to AI in 2025, more than three times the total in the preceding two years, according to recruiting firm Challenger, Gray & Christmas [1]. Enterprise software company Atlassian cut 10% of its staff in March, while fintech firm Block slashed 40%. Block CEO Jack Dorsey said that AI tools, paired with "smaller and flatter teams," are fundamentally changing what it means to build and run a company.

According to a 2025 Pew Research Center report, workers are more worried than hopeful about AI in the workplace [2]. Research shows that when employees believe they are likely to lose their job, the risk of serious psychological distress rises considerably—along with the likelihood of mental health leave, disengagement, and burnout. Some employees worry that by using AI at work they're essentially training the automaton that will replace them.

Leaders are rolling out AI tools at speed, but most are treating this as a technology challenge when it's actually a people challenge, argues the CEO of ComPsych, the world's largest provider of employee mental health services [2]. Job insecurity driven by AI fear manifests in two damaging patterns: disengagement—distracted workers doing the minimum to get by—and its opposite, workers who over-compensate by becoming hypersensitive and over-extended, eventually burning out.

To counteract this, leaders need to focus on building genuine trust through early, open, and consistent communication. In the absence of communication, rumors form, narratives harden, and fear sets in. Sharing relevant information as soon as it's available—being transparent about what's still being determined—allows leaders to inform and reassure simultaneously [2]. Articulating that AI may change parts of a person's job but does not diminish their value gives leaders the opportunity to set the tone and reduce fear before anxiety takes hold.

AI is confronting leaders with a question many have never had to answer: Who am I if I am not the differentiator? [3] For years, high performers have regulated their confidence through competence. They built authority by being the most creative, data-driven, and strategic thinker in the room. Now, large language models can draft strategy outlines in seconds. This isn't just a technological shift—it's an identity disruption.

From an executive psychology perspective, when identity has been unconsciously tied to exceptional performance, disruption feels like exposure. Many high achievers developed early patterns that linked belonging and safety to output. Achievement became more than success—it became a way to regulate how they feel about themselves [3]. When a system can now perform parts of your role faster than you can, that equation destabilizes.

Clarity about where AI should—and shouldn't—have the final word is one of the most important things a leader can establish early [2]. Organizations must actively ensure their employees are growing alongside the technology, not being hollowed out by it. Research from MIT's Media Lab found that heavy reliance on AI tools can atrophy the independent thinking and problem-solving skills people need most.

Greg Hart, CEO of online learning platform Coursera, warns that a stick approach might achieve short-term goals but fail the long-term objective of building an organization that is much more nimble and resilient [1]. For companies, the stakes of successfully adapting to AI go beyond immediate productivity metrics. Leaders must actively encourage imaginative thinking, reinforce the value of individual skills, and invest in continuous learning programs.

The competitive edge in the AI era will not belong to the leader who can outproduce a machine. It will belong to the leader whose identity is not destabilized by one [3]. When performance is no longer the sole differentiator, leadership shifts toward integration, discernment, and relational intelligence. Toward the capacity to hold uncertainty without collapsing into urgency.

Wharton management professor Peter Cappelli notes that too many executives are still "listening to the people who built the tools" instead of asking whether those approaches make sense in their own businesses [1]. The builders are not experts in business or in management, yet their success stories are being treated as a universal blueprint. The leaders who get this right won't just be the ones who deploy AI fastest—they'll be the ones who brought their people along, reducing fear, building trust, and preserving the human judgment that no model can replicate.

Summarized by Navi
25 Feb 2026 • Business and Economy
