2 Sources
[1]
Why AI emails can quietly destroy trust at work
With over 75% of professionals using AI in their daily work, writing and editing messages with tools like ChatGPT, Gemini, Copilot or Claude has become commonplace. While generative AI tools are seen to make writing easier, are they effective for communication between managers and employees? A new study of 1,100 professionals reveals a critical paradox in workplace communications: AI tools can make managers' emails more professional, but regular use can undermine trust between managers and their employees.

"We see a tension between perceptions of message quality and perceptions of the sender," said Anthony Coman, Ph.D., a researcher at the University of Florida's Warrington College of Business and study co-author. "Despite positive impressions of professionalism in AI-assisted writing, managers who use AI for routine communication tasks put their trustworthiness at risk when using medium to high levels of AI assistance."

In the study, published in the International Journal of Business Communication, Coman and his co-author, Peter Cardon, Ph.D., of the University of Southern California, surveyed professionals about how they viewed emails they were told were written with low, medium and high AI assistance. Participants were asked to evaluate different AI-written versions of a congratulatory message on both their perception of the message content and their perception of the sender.

While AI-assisted writing was generally seen as efficient, effective and professional, Coman and Cardon found a "perception gap" between messages written by managers and those written by employees. "When people evaluate their own use of AI, they tend to rate their use similarly across low, medium and high levels of assistance," Coman explained. "However, when rating others' use, magnitude becomes important. Overall, professionals view their own AI use leniently, yet they are more skeptical of the same levels of assistance when used by supervisors."

While low levels of AI help, like grammar checking or editing, were generally acceptable, higher levels of assistance triggered negative perceptions. The perception gap is especially pronounced when employees perceive higher levels of AI writing, calling into question the authorship, integrity, caring and competency of their manager.

The impact on trust was substantial: only 40% to 52% of employees viewed supervisors as sincere when they used high levels of AI, compared to 83% for low-assistance messages. Similarly, while 95% found low-AI supervisor messages professional, this dropped to 69-73% when supervisors relied heavily on AI tools.

The findings reveal that employees can often detect AI-generated content and may interpret its use as laziness or a lack of caring. When supervisors rely heavily on AI for messages like team congratulations or motivational communications, employees perceive them as less sincere and question their leadership abilities. "In some cases, AI-assisted writing can undermine perceptions of traits linked to a supervisor's trustworthiness," Coman noted, specifically citing impacts on perceived ability and integrity, both key components of cognitive-based trust.

The study suggests managers should carefully consider message type, level of AI assistance and relational context before using AI in their writing. While AI may be appropriate and professionally received for informational or routine communications, like meeting reminders or factual announcements, relationship-oriented messages requiring empathy, praise, congratulations, motivation or personal feedback are better handled with minimal technological intervention.
[2]
Using AI at work could cost you your colleagues' trust, study finds
People are harsher judges when their bosses use AI to help write emails than when they use it themselves, a study has found.

If you are a boss using artificial intelligence (AI) to communicate with your employees, beware: your professional credibility may be at stake, a new study has found. AI is rapidly making its way into the workplace. In the European Union, 13.5 per cent of businesses with at least 10 employees reported using AI last year, compared to 8 per cent in 2023, according to Eurostat. And a 2024 survey by Microsoft and LinkedIn found that 75 per cent of knowledge professionals around the world used generative AI at work.

While this shift can help boost workers' productivity, it can also come with downsides - for example, hurting relationships between colleagues.

For the new study, which was published in the International Journal of Business Communication, researchers surveyed more than 1,000 full-time professionals in the United States to understand how they perceived emails written with low, medium, or high levels of AI assistance. They were given scenarios where they were randomly shown one email, which they were told was written either by themselves or by their supervisor, and asked to rate it on traits such as professionalism, effectiveness, sincerity, and how caring it was.

While AI-assisted messages were generally considered efficient, effective, and professional, the study highlighted a perception gap in messages written by employees and managers. "Overall, professionals view their own AI use leniently, yet they are more sceptical of the same levels of assistance when used by supervisors," Anthony Coman, a researcher at the University of Florida and one of the study's authors, said in a statement.

In other words, people tend to judge AI use more strictly when it comes from their manager than when they use it themselves. This difference was especially significant if a supervisor's message relied more heavily on AI, going beyond simple grammar, proofreading, or editing. Only 40 per cent of employees viewed supervisors as sincere when they used high levels of AI, compared to 83 per cent for low-assistance messages, the study found.

"In some cases, AI-assisted writing can undermine perceptions of traits linked to a supervisor's trustworthiness," Coman said.

However, staffers' perceptions also appeared to vary depending on the purpose of the message. If the email was seen as purely informative, employees tended to view AI use positively. But if it was perceived as relationship-based or motivational, they were far less accepting.

The study has some limitations. The researchers drew conclusions based on one hypothetical scenario, and people's perceptions could also be biased by the power dynamic between employee and supervisor. Even so, the results are in line with other findings. Researchers have pointed out how the use of AI in professional settings can hinder one's reputation, while others recently found that people who admit they use AI at work can lose their colleagues' trust.
A new study finds that while AI tools can enhance email professionalism, their regular use by managers can significantly undermine trust with employees, especially in relationship-oriented communications.
In an era where over 75% of professionals use AI tools like ChatGPT, Gemini, Copilot, or Claude in their daily work [1], a new study has uncovered a critical paradox in workplace communications. While AI can enhance the professionalism of emails, its regular use by managers can significantly undermine trust with their employees [1][2].
The study, published in the International Journal of Business Communication, surveyed 1,100 professionals about their perceptions of emails written with varying levels of AI assistance. Researchers Anthony Coman from the University of Florida and Peter Cardon from the University of Southern California found a notable "perception gap" between how people view their own AI use versus that of their supervisors [1].
"Overall, professionals view their own AI use leniently, yet they are more skeptical of the same levels of assistance when used by supervisors," explained Coman [2]. This gap widens as the level of AI assistance increases, particularly when employees perceive higher levels of AI writing in their managers' communications [1].
The study revealed a substantial impact on trust:
- Only 40% to 52% of employees viewed supervisors as sincere when they used high levels of AI assistance, compared to 83% for low-assistance messages [1].
- While 95% of employees found low-AI supervisor messages professional, this figure dropped to 69-73% when supervisors relied heavily on AI tools [1].
These findings suggest that employees can often detect AI-generated content and may interpret its extensive use as laziness or a lack of caring, especially in relationship-oriented communications [1][2].
The perception of AI use varies depending on the purpose of the message. Employees tend to view AI use positively for purely informative emails. However, for relationship-based or motivational communications, they are far less accepting [2]. This distinction highlights the importance of context in AI-assisted communication.
The study's findings have significant implications for workplace dynamics and leadership perception. When supervisors rely heavily on AI for messages like team congratulations or motivational communications, employees may question their sincerity and leadership abilities [1].
"In some cases, AI-assisted writing can undermine perceptions of traits linked to a supervisor's trustworthiness," noted Coman, specifically citing impacts on perceived ability and integrity, both key components of cognitive-based trust 12.
Given these findings, the researchers suggest that managers should carefully consider the type of message, level of AI assistance, and relational context before using AI in their writing 1. While AI may be appropriate for informational or routine communications, relationship-oriented messages requiring empathy, praise, or personal feedback are better handled with minimal technological intervention 1.
As AI continues to integrate into workplace communication, understanding its impact on interpersonal dynamics becomes crucial. This study serves as a reminder that while AI can enhance efficiency, its use must be balanced with maintaining authentic human connections in professional relationships.
Summarized by Navi