2 Sources
[1]
New study reveals why experts feel deeply insulted when clients trust AI
Consult AI, but keep it discreet or lose a vital professional relationship

* Study finds professionals feel disrespected when clients compare their expertise with AI-generated answers
* Advisors become less motivated after losing clients to AI-powered recommendations online
* Clients using AI fact-checks may appear less trustworthy to professionals afterward

A new study from Monash Business School claims professional advisors feel offended when clients use AI to get a second opinion on their recommendations. The research, published in Computers in Human Behavior, found professionals become less motivated to work with clients who consult AI tools. This effect persists even when the client only uses AI for background information, or as a complementary resource rather than a replacement.

Human experts feel insulted by AI fact-checking

"Advisors view AI as substantially inferior to themselves; thus, being placed in the same category as an AI system feels insulting and signals disrespect, undermining advisors' willingness to engage," Associate Professor Gerri Spassova, the lead author, said.

Imagine spending an hour helping a client plan a complex trip, carefully mapping out flights, hotels, and itineraries, only for that client to take your recommendations and book everything through an AI chatbot instead. Researchers found professionals who lost business to an AI were far less willing to work with that client again in the future.

Clients who consult AI may be seen as less competent and less warm by the advisors they approach for help. When clients defer to AI, it prompts advisors to question the value of their own human contribution, and this may get worse as AI improves. Many advisors take offense at this, and it is the major reason they pull back from clients who consult AI.

"One can only speculate," Associate Professor Spassova said. "My intuition is that the situation will not get much better. Firstly, because professional advisors' jobs are on the line.

"Also, as AI gets better, it may threaten our sense of worth and self-regard, and so when clients defer to AI, it would prompt advisors to question the value of their human contribution."

Discreetly consult AI tools if you must

The study suggests that, in new client-advisor relationships, people should not disclose that they consulted AI before the meeting. A long history of working together might weaken the negative reaction, but even then, the advisor may still feel cheated. This applies to doctors, lawyers, and other professionals whose expertise clients might fact-check with AI tools. A doctor who spent years training does not want to be second-guessed by a patient who spent five minutes on ChatGPT.

AI tools usually give a general overview of a situation and are prone to mistakes. Their responses depend heavily on how much information you supply, and if you are not detailed enough, the answer can be misleading. AI also answers questions based on how they are asked, so users can easily steer a tool into telling them what they want to hear. Given these limitations, it would be unfair to judge a professional with years of study and experience against an uncertain tool.

There is no need to announce to a professional that you have consulted AI, because doing so creates a sense of a "lack of trust". Until professional norms adjust to the presence of AI, clients would be wise to keep their fact-checking private or risk damaging professional relationships.
[2]
Professionals feel disrespected when clients fact-check them with AI, study says
A study from Monash Business School found that professionals feel disrespected when clients use AI tools like ChatGPT to fact-check their expertise. Published in the journal Computers in Human Behavior, the research indicates that this practice diminishes professionals' motivation to engage with clients. Even minimal use of AI, such as for background information, can negatively impact the advisor-client relationship.

"Advisors view AI as substantially inferior to themselves; thus, being placed in the same category as an AI system feels insulting," said Associate Professor Gerri Spassova, the lead author of the study. Researchers noted a significant decline in advisors' willingness to work with clients who have sought AI-driven recommendations. Many professionals felt that clients who consult AI appear less competent and trustworthy. A particularly illustrative scenario involved a client disregarding an advisor's careful planning by choosing to book through an AI chatbot instead.

According to Spassova, this offense leads advisors to question their own value in the face of AI advancements. "When clients defer to AI, it prompts advisors to question the value of their own human contribution," Spassova stated. This concern may grow as AI continues to improve.

The study suggests that clients should avoid disclosing AI consultations in new advisor relationships to mitigate negative reactions. While long-standing professional relationships may lessen these feelings, advisors can still feel slighted. The findings are pertinent for various fields, including healthcare and legal services, where professionals expect clients to rely on their expertise rather than AI.

AI tools typically provide general overviews and can make errors, particularly when users supply limited information. The likelihood of misleading advice increases when clients tailor their queries to confirm existing beliefs. This creates issues of trust between professionals and their clients.

Professionals are advised to maintain boundaries regarding AI consultations, and clients are encouraged to keep their use of these tools private until industry norms evolve. "It creates a sense of lack of trust," Spassova cautioned about openly discussing AI consultation with experts.
A Monash Business School study published in Computers in Human Behavior shows that professionals become less motivated to work with clients who use AI tools like ChatGPT for second opinions. Experts view being compared to AI as disrespectful, and the issue may worsen as AI capabilities advance.
A new Monash Business School study has uncovered a troubling dynamic in professional relationships: experts feel insulted when clients trust AI to verify their recommendations. Published in the journal Computers in Human Behavior, the research reveals that professional advisors become less motivated to engage with clients who use AI tools like ChatGPT for second opinions [1][2]. This effect persists even when clients use AI for background information rather than as a direct replacement for human expertise.

"Advisors view AI as substantially inferior to themselves; thus, being placed in the same category as an AI system feels insulting and signals disrespect, undermining advisors' willingness to engage," explained Associate Professor Gerri Spassova, the lead author of the study [1]. The research found that when clients fact-check them with AI, professionals perceive it as a direct challenge to their years of training and experience. Imagine a scenario where an advisor spends an hour carefully planning a complex trip, mapping out flights, hotels, and itineraries, only for the client to take those recommendations and book everything through an AI chatbot instead [1]. Researchers found that professionals who lost business to AI recommendations were far less willing to work with that client again in the future.
The study demonstrates that clients who consult AI may appear less competent and less trustworthy to the advisors they approach for help [1]. When clients use AI for second opinions, it prompts advisors to question the value of their own human contribution, and this dynamic may intensify as AI capabilities advance [2]. Many advisors take offense at this comparison, which becomes the major reason they pull back from professional relationships. This applies across multiple fields, including doctors, lawyers, and other professionals whose expertise clients might verify with AI tools [1]. A doctor who spent years training does not want to be second-guessed by a patient who spent five minutes on ChatGPT.

Associate Professor Spassova speculates that the situation may not improve as AI technology advances. "As AI gets better, it may threaten our sense of worth and self-regard, and so when clients defer to AI, it would prompt advisors to question the value of their human contribution," she noted [1]. Professional advisors' jobs are increasingly on the line, which heightens their sensitivity to being compared with AI systems [1]. This concern underscores how AI fact-checking can undermine the perceived value of human expertise in ways that professionals find deeply offensive.

The study suggests that clients should avoid disclosing AI consultations in new advisor relationships to prevent negative reactions [2]. While a long history of working together might weaken the negative response, even established professional relationships can suffer when advisors feel their expertise is being questioned [1]. AI tools typically provide general overviews and are prone to making mistakes, with their accuracy highly dependent on the amount and quality of information supplied [1]. Users can easily influence AI recommendations to confirm existing beliefs, making it unfair to judge professionals with years of study based on an uncertain tool [2]. Until professional norms adjust to the presence of AI, clients would be wise to keep their fact-checking private or risk damaging vital professional relationships [1]. Openly discussing AI consultation creates a sense of lack of trust that makes professionals less motivated to provide their best service [2].