Curated by THEOUTPOST
On Thu, 23 Jan, 12:06 AM UTC
4 Sources
[1]
Why are most companies failing to benefit from AI? It's about the people, not the tech
Successful uptake of new technology is a matter of emotions -- and with four in five companies saying they're failing to capitalise on its potential, managers need to know how to deal with them, say researchers from Aalto University.

AI has the potential to enhance decision-making, spark innovation and help leaders boost employees' productivity, according to recent research. Many large companies have invested accordingly, in both funding and effort. Yet despite this, studies show that they are failing to achieve the expected benefits, with as many as 80 percent of companies reporting a failure to benefit from the new technology.

'Often employees fail to embrace new AI and benefit from it, but we don't really know why,' says Assistant Professor Natalia Vuori from Aalto University. Our limited understanding stems partly from the tendency to study these failings as limitations of the technologies themselves, or from the perspective of users' cognitive judgments about AI performance, she says. 'What we learned is that success is not so much about technology and its capabilities, but about the different emotional and behavioural reactions employees develop towards AI -- and how leaders can manage these reactions,' says Vuori.

Her research team followed a consulting company of 600 employees for over a year as it attempted to develop and implement a new artificial intelligence tool. The tool was supposed to collect employees' digital footprints and map their skills and abilities, ultimately building a capabilities map of the company. The results were supposed to streamline the team selection process for consulting projects, and the whole experiment was, in fact, a pilot for AI software the company hoped to offer its own customers. After almost two years, the company buried the experiment -- and the proposed product. So what happened?
It turns out that although some staff believed the tool performed well and was very valuable, they were not comfortable with AI following their calendar notes, internal communications and daily dealings. As a result, employees either stopped providing information altogether or started manipulating the system by feeding it information they thought would benefit their career path. This led to the AI becoming increasingly inaccurate in its output, feeding a vicious cycle as users started losing faith in its abilities.

'Leaders couldn't understand why the AI usage was declining. They were taking a lot of action to promote the tools and so on, trying to explain how they use the data, but it didn't help,' says Vuori, who believes this case study reflects a common pattern in AI uptake, and in tech adoption generally. The team is now collecting data on the use of Microsoft's widely used Copilot AI software, which is so far yielding similar findings.

What should leaders do?

The researchers found that people fell into four groups in terms of their reaction to the new technology. Distinguishing between cognitive trust (whether a person believes the technology performs well) and emotional trust (their feelings towards the system), the groups were: full trust, full distrust, uncomfortable trust and blind trust. People in the first group had high trust on both the cognitive and emotional level, whereas people in the second group scored low on both. Uncomfortable trust signified high cognitive trust but low emotional trust, and vice versa for blind trust. The less people trusted the tool emotionally, the more they restricted, withdrew or manipulated their digital footprint -- and notably, this held true even when they had cognitive trust in the technology. The findings give companies the chance to strategise a more successful approach to AI uptake. "AI adoption isn't just a technological challenge -- it's a leadership one.
Success hinges on understanding trust and addressing emotions, and making employees feel excited about using and experimenting with AI," says Vuori. "Without this human-centered approach, and strategies that are tailored to address the needs of each group, even the smartest AI will fail to deliver on its potential."
[2]
Why are 80 percent of companies failing to benefit from AI? It's about the people, not the tech, says new study | Newswise
Newswise -- The research findings were published in the Journal of Management Studies on 22 January.
[3]
AI integration is about people, not technology, says report
Introducing advanced AI into an organisation is never going to be a seamless transition, but companies can make it easier on themselves by focusing on the right aspects.

Research suggests that around 266m organisations worldwide are either using or experimenting with AI -- roughly 82pc of companies globally. Whether it is through content creation, HR and asset management, workflow, analytics or one of dozens of other use cases for organisational AI, the technology is quickly becoming a company staple.

But for Natalia Vuori, assistant professor at Aalto University and co-author of a recent study into the impact of AI adoption on employees, the successful uptake of AI is a matter of emotions, and company leaders are failing to capitalise on it. Vuori's study, titled "It's Amazing - But Terrifying: Unveiling the Combined Effect of Emotional and Cognitive Trust on Organisational Members' Behaviours, AI Performance, and Adoption", was published yesterday (22 January) in the Journal of Management Studies.

For more than a year, her research team studied 600 employees at a consulting company to determine how they managed the development and introduction of a new AI tool. "What we learned is that success is not so much about technology and its capabilities, but about the different emotional and behavioural reactions employees develop towards AI and how leaders can manage these reactions," said Vuori.

The idea behind the AI-powered tool was that it would gather information on employees' digital footprints, tracking their skills and abilities, which could then be used to build a map of the company's overall capabilities. The data could inform organisational leaders and streamline the team selection process for consulting projects. The end hope was that the tool would eventually reach a stage where it could be offered to the company's own customers; however, after almost two years, it was scrapped. But why?
Transparency is power

AI integration is a tricky beast, and despite a company's best efforts any number of factors can cause a project to be delayed, derailed or simply end in disaster. Often it is a lack of strategic planning; other times resources are in short supply or of less-than-ideal quality. Sometimes it is external factors that cause the most internal disruption.

For the consulting company that participated in the study, the report shows it was a disconnect between the technology, the leadership and the employee experience that led to the downfall of AI integration in this particular organisation. While many of the workforce found the tool to be a potentially valuable asset, overall they were uncomfortable with the depth of information being collected, namely calendar notes, internal communications and daily activities.

Consequently, employees began to omit and manipulate data to ensure they would not adversely or unknowingly harm their future career prospects, which rendered the AI tool ineffective and inaccurate and made it difficult for anyone in the organisation to have faith in its potential. "Leaders couldn't understand why the AI usage was declining. They were taking a lot of action to promote the tools and so on, trying to explain how they use the data, but it didn't help," explained Vuori, who believes this example of faltering AI and advanced-technology adoption is actually commonplace.

What's to be done?

Ultimately, if employers are committed to developing and eventually capitalising on AI tools and technologies, the research finds they will have to ensure the workforce is emotionally resilient and informed enough to withstand setbacks.
According to the data, there were four distinct groups within the case study: those who fully trusted the tech, those who fully distrusted it, those who held an uncomfortable trust and those who blindly trusted it. A failure to fully trust the tech is what led employees to withdraw and manipulate their digital footprints, skewing the results and further diminishing belief in the system.

Essentially, transparency and a system that encourages feedback, constructive conversation and training could make all the difference when attempting to introduce new and advanced processes. "AI adoption isn't just a technological challenge, it's a leadership one. Success hinges on understanding trust and addressing emotions, and making employees feel excited about using and experimenting with AI," says Vuori. "Without this human-centered approach and strategies that are tailored to address the needs of each group, even the smartest AI will fail to deliver on its potential."
[4]
Employee trust in AI linked to performance and adoption rates
Many companies are making substantial investments in artificial intelligence (AI), which can enhance decision-making processes, foster innovation, increase productivity, and have other advantages. New research published in the Journal of Management Studies shows that company employees' perceptions of how well AI performs (cognitive trust) and feelings towards AI (emotional trust) vary, and that these perceptions can affect AI performance and adoption in organizations. Interviews with employees of a medium-sized software development firm revealed four different trust configurations: full trust (high cognitive/high emotional), full distrust (low cognitive/low emotional), uncomfortable trust (high cognitive/low emotional), and blind trust (low cognitive/high emotional). Employees exhibited distinct behaviors under these different trust configurations: some responded by detailing their digital footprints, while others engaged in manipulating, confining, or withdrawing them. These behaviors triggered a "vicious cycle," where biased and unbalanced data inputs degraded AI performance, further eroding trust and stalling adoption. The findings could provide insights into how managers should introduce AI into the workplace. "AI adoption isn't just a technological challenge -- it's a leadership one. Success hinges on understanding trust, addressing emotions, and meeting employees where they are," said corresponding author Natalia Vuori, DSc, of Aalto University, in Finland. "Without this human-centered approach, even the smartest AI will fail to deliver on its promise."
A study reveals that employee trust in AI, both cognitive and emotional, significantly impacts its performance and adoption in companies. Despite heavy investments, 80% of firms fail to benefit from AI due to human factors rather than technological limitations.
Recent research has unveiled a critical factor in the successful implementation of artificial intelligence (AI) in companies: employee trust. Despite substantial investments in AI technology, a staggering 80% of companies report failing to reap the expected benefits [1]. The study, led by Assistant Professor Natalia Vuori from Aalto University, suggests that the key to successful AI adoption lies not in the technology itself, but in understanding and managing employees' emotional and behavioral reactions to AI [2].
Vuori's research team followed a consulting company of 600 employees for over a year as they attempted to implement a new AI tool. The tool was designed to map employees' skills and abilities by collecting their digital footprints, with the aim of streamlining team selection for consulting projects [1].
However, the project failed after almost two years. Despite some staff believing in the tool's performance, many were uncomfortable with the AI monitoring their calendar notes, internal communications, and daily activities. This discomfort led to employees either withholding information or manipulating the system, resulting in increasingly inaccurate AI output [2].
The research identified four distinct groups based on employees' reactions to the new technology:
- Full trust: high cognitive and high emotional trust
- Full distrust: low cognitive and low emotional trust
- Uncomfortable trust: high cognitive but low emotional trust
- Blind trust: low cognitive but high emotional trust
Importantly, the study found that emotional trust played a crucial role in AI adoption, even when cognitive trust (belief in the technology's performance) was high [3].
The less employees trusted the tool emotionally, the more they restricted, withdrew, or manipulated their digital footprint. This behavior led to biased and unbalanced data inputs, degrading AI performance and further eroding trust, creating a vicious cycle that stalled adoption [4].
The findings have significant implications for the estimated 266 million organizations worldwide that are using or experimenting with AI, representing about 82% of companies globally [3].
Vuori emphasizes that successful AI adoption is not just a technological challenge but a leadership one. "Success hinges on understanding trust and addressing emotions, and making employees feel excited about using and experimenting with AI," she states [1].
To capitalize on AI's potential, companies need to adopt a human-centered approach. This includes:
- Being transparent about what data is collected and how it is used
- Creating channels for feedback, constructive conversation, and training
- Tailoring strategies to the needs of each trust group
By focusing on these aspects, organizations can create an environment where employees feel comfortable and excited about AI integration, ultimately unlocking its full potential [3].
The Outpost is a comprehensive collection of curated artificial intelligence software tools that cater to the needs of small business owners, bloggers, artists, musicians, entrepreneurs, marketers, writers, and researchers.
© 2025 TheOutpost.AI All rights reserved