Professional advisors feel insulted when clients use AI fact-checking, new study reveals

Reviewed by Nidhi Govil


A Monash Business School study published in Computers in Human Behavior shows that professionals become less motivated to work with clients who use AI tools like ChatGPT for second opinions. Experts view being compared to AI as disrespectful, and the issue may worsen as AI capabilities advance.

Professional Advisors React Negatively to AI Fact-Checking

A new Monash Business School study has uncovered a troubling dynamic in professional relationships: experts feel insulted when clients trust AI to verify their recommendations. Published in the journal Computers in Human Behavior, the research reveals that professional advisors become less motivated to engage with clients who use AI tools like ChatGPT for second opinions [1][2]. This effect persists even when clients use AI for background information rather than as a direct replacement for human expertise.

Why Professionals Feel Disrespected by Client AI Use

"Advisors view AI as substantially inferior to themselves; thus, being placed in the same category as an AI system feels insulting and signals disrespect, undermining advisors' willingness to engage," explained Associate Professor Gerri Spassova, the lead author of the study [1]. The research found that when clients fact-check them with AI, professionals perceive it as a direct challenge to their years of training and experience. Imagine a scenario where an advisor spends an hour carefully planning a complex trip, mapping out flights, hotels, and itineraries, only for the client to take those recommendations and book everything through an AI chatbot instead [1]. Researchers found that professionals who lost business to AI recommendations were far less willing to work with that client again in the future.

Source: TechRadar


How AI Fact-Checking Damages Advisor-Client Relationships

The study demonstrates that clients who consult AI may appear less competent and less trustworthy to the advisors they approach for help [1]. When clients use AI for second opinions, it prompts advisors to question the value of their own human contribution, and this dynamic may intensify as AI capabilities advance [2]. Many advisors take offense at the comparison, and it becomes the main reason they pull back from professional relationships. The finding applies across multiple fields, including medicine, law, and other professions whose expertise clients might verify with AI tools [1]. A doctor who spent years training does not want to be second-guessed by a patient who spent five minutes on ChatGPT.

The Growing Tension as AI Capabilities Improve

Associate Professor Spassova speculates that the situation may not improve as AI technology advances. "As AI gets better, it may threaten our sense of worth and self-regard, and so when clients defer to AI, it would prompt advisors to question the value of their human contribution," she noted [1]. Professional advisors' jobs are increasingly on the line, which heightens their sensitivity to being compared with AI systems [1]. This concern underscores how AI fact-checking can undermine the perceived value of human expertise in ways that professionals find deeply offensive.

Practical Implications for Client Behavior

The study suggests that clients should avoid disclosing AI consultations in new advisor relationships to prevent negative reactions [2]. While a long history of working together might soften the negative response, even established professional relationships can suffer when advisors feel their expertise is being questioned [1]. AI tools typically provide general overviews and are prone to making mistakes, with their accuracy highly dependent on the amount and quality of information supplied [1]. Users can easily steer AI recommendations toward confirming their existing beliefs, making it unfair to judge professionals with years of study against such an uncertain tool [2]. Until professional norms adjust to the presence of AI, clients would be wise to keep their fact-checking private or risk damaging valuable professional relationships [1]. Openly discussing AI consultation signals a lack of trust that makes professionals less motivated to provide their best service [2].

© 2026 TheOutpost.AI All rights reserved