Study Reveals People Trust ChatGPT's Legal Advice More Than Human Lawyers


A new study finds that non-experts are more likely to rely on legal advice from ChatGPT than from human lawyers, raising concerns about AI literacy and the need for proper regulation.


Study Reveals Surprising Trust in AI-Generated Legal Advice

A groundbreaking study has uncovered a concerning trend in public trust in AI-generated legal advice. Researchers found that people without legal expertise are more inclined to rely on advice from ChatGPT, an AI language model, than on advice from human lawyers, particularly when the source of the advice is not disclosed.

Key Findings of the Research

The study, conducted across three experiments involving 288 participants, revealed several important insights:

  1. When the source of legal advice was not disclosed, participants showed a greater willingness to act on AI-generated advice compared to advice from human lawyers.

  2. Even when participants were informed about the source of the advice, they were equally likely to follow ChatGPT's recommendations as those from a lawyer.

  3. Participants could distinguish between AI-generated and human-written content, but only marginally better than random guessing.
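The "marginally better than random guessing" finding is a statistical claim that can be illustrated with a one-sided binomial test: how likely is a given accuracy if participants were purely guessing? The numbers below are hypothetical and for illustration only; the article does not report the participants' exact accuracy or number of classification trials.

```python
from math import comb

def binom_p_one_sided(successes: int, trials: int, p: float = 0.5) -> float:
    """P(X >= successes) under Binomial(trials, p): the probability of
    scoring at least this well by random guessing alone."""
    return sum(comb(trials, k) * p**k * (1 - p)**(trials - k)
               for k in range(successes, trials + 1))

# Hypothetical figures: 59 correct out of 100 trials, vs. 50 expected
# by chance. The resulting p-value sits near the conventional 0.05
# threshold, which is what "marginally better than chance" looks like.
p_value = binom_p_one_sided(59, 100)
print(f"p-value: {p_value:.3f}")
```

With 59/100 the evidence against pure guessing is weak; a clearly reliable detector (say 75/100) would yield a far smaller p-value, which is the contrast the finding turns on.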

Factors Influencing AI Preference

The researchers identified potential reasons for the preference for AI-generated advice:

  1. Language Complexity: ChatGPT tends to use more complex language, which may be perceived as more authoritative or knowledgeable.

  2. Confidence in Delivery: AI models often present information with high confidence, making it challenging for users to distinguish accurate advice from potentially flawed advice.

Implications and Risks

This trend raises significant concerns, particularly in high-stakes domains like law:

  1. Misinformation Risk: AI models are known to produce "hallucinations" – inaccurate or nonsensical content – which could lead to serious consequences if acted upon without verification.

  2. Overreliance on AI: The public's willingness to trust AI-generated advice could result in neglecting human expertise and critical thinking.

Call for Regulation and AI Literacy

The study's authors emphasize the need for a two-pronged approach to address these challenges:

  1. AI Regulation: Initiatives like the EU AI Act are crucial in ensuring transparency in AI-generated content. Article 50.9 of the act requires AI-generated text to be clearly marked and detectable.

  2. Improving AI Literacy: The public needs to develop better skills in critically assessing AI-generated content, understanding its limitations, and recognizing the importance of human expertise.

Responsible Use of AI in Legal Contexts

While AI can be a valuable tool for initial legal inquiries, the researchers stress the importance of verifying any AI-generated advice with human lawyers before taking significant action. This approach harnesses the benefits of AI while mitigating potential risks.

As AI continues to integrate into various aspects of daily life, from home assistants to complex task management, the need for responsible use and critical evaluation of AI-generated content becomes increasingly crucial. This study serves as a timely reminder of the challenges and opportunities presented by AI in professional domains like law.

TheOutpost.ai

© 2025 Triveous Technologies Private Limited