Curated by THEOUTPOST
On Tue, 29 Apr, 12:01 AM UTC
2 Sources
[1]
People trust legal advice generated by ChatGPT more than a lawyer - new study
People who aren't legal experts are more willing to rely on legal advice provided by ChatGPT than by real lawyers - at least, when they don't know which of the two provided the advice. That's the key finding of our new research, which highlights some important concerns about the way the public increasingly relies on AI-generated content. We also found the public has at least some ability to identify whether the advice came from ChatGPT or a human lawyer.

AI tools like ChatGPT and other large language models (LLMs) are making their way into our everyday life. They promise to provide quick answers, generate ideas, diagnose medical symptoms, and even help with legal questions by providing concrete legal advice. But LLMs are known to create so-called "hallucinations" - that is, outputs containing inaccurate or nonsensical content. This means there is a real risk associated with people relying on them too much, particularly in high-stakes domains such as law. LLMs tend to present advice confidently, making it difficult for people to distinguish good advice from decisively voiced bad advice.

We ran three experiments on a total of 288 people. In the first two experiments, participants were given legal advice and asked which they would be willing to act on. When people didn't know if the advice had come from a lawyer or an AI, we found they were more willing to rely on the AI-generated advice. This means that if an LLM gives legal advice without disclosing its nature, people may take it as fact and prefer it to expert advice by lawyers - possibly without questioning its accuracy.

Even when participants were told which advice came from a lawyer and which was AI-generated, we found they were willing to follow ChatGPT just as much as the lawyer. One reason LLMs may be favoured, as we found in our study, is that they use more complex language. Real lawyers, on the other hand, tended to use simpler language but more words in their answers.

The third experiment investigated whether participants could distinguish between LLM- and lawyer-generated content when the source is not revealed to them. The good news is they can - but not by very much. In our task, random guessing would have produced a score of 0.5, while perfect discrimination would have produced a score of 1.0. On average, participants scored 0.59, indicating performance that was slightly better than random guessing, but still relatively weak.

Regulation and AI literacy

This is a crucial moment for research like ours, as AI-powered systems such as chatbots and LLMs are becoming increasingly integrated into everyday life. Alexa or Google Home can act as a home assistant, while AI-enabled systems can help with complex tasks such as online shopping, summarising legal texts, or generating medical records. Yet this comes with the significant risk that potentially life-altering decisions will be guided by hallucinated misinformation. In the legal case, AI-generated, hallucinated advice could cause unnecessary complications or even miscarriages of justice.

That's why it has never been more important to properly regulate AI. Attempts so far include the EU AI Act, article 50.9 of which states that text-generating AIs should ensure their outputs are "marked in a machine-readable format and detectable as artificially generated or manipulated". But this is only part of the solution. We'll also need to improve AI literacy so that the public is better able to critically assess content.
When people are better able to recognise AI, they'll be able to make more informed decisions. This means we need to learn to question the source of advice, understand the capabilities and limitations of AI, and emphasise the use of critical thinking and common sense when interacting with AI-generated content. In practical terms, this means cross-checking important information with trusted sources and involving human experts to prevent overreliance on AI-generated information.

In the case of legal advice, it may be fine to use AI for some initial questions: "What are my options here? What do I need to read up on? Are there any similar cases to mine, or what area of law is this?" But it's important to verify the advice with a human lawyer long before ending up in court or acting upon anything generated by an LLM.

AI can be a valuable tool, but we must use it responsibly. By using a two-pronged approach that focuses on regulation and AI literacy, we can harness its benefits while minimising its risks.

Read more: We asked ChatGPT for legal advice - here are five reasons why you shouldn't
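The article does not say exactly how the 0.5-to-1.0 discrimination score was computed, but the scale (0.5 for chance, 1.0 for perfect) is consistent with a simple proportion-correct measure. The following is a minimal Python sketch under that assumption only; the judgment data shown is invented for illustration, not taken from the study.

```python
# Hypothetical sketch: discrimination score as the proportion of
# correct source judgments. 0.5 ~ random guessing, 1.0 ~ perfect.
# The lists below are illustrative, not data from the study.

true_sources = ["ai", "lawyer", "ai", "lawyer", "ai", "lawyer", "ai", "lawyer"]
judgments    = ["ai", "ai",     "ai", "lawyer", "lawyer", "lawyer", "ai", "ai"]

# Count how many judgments match the true source of each text.
correct = sum(t == j for t, j in zip(true_sources, judgments))
score = correct / len(true_sources)

print(f"Discrimination score: {score:.2f}")  # 0.62 here; the study's mean was 0.59
```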
[2]
People trust legal advice generated by ChatGPT more than a lawyer -- new study
by Eike Schneiders, Joshua Krook and Tina Seabrooke, The Conversation
A new study finds that non-experts are more likely to rely on legal advice from ChatGPT than from human lawyers, raising concerns about AI literacy and the need for proper regulation.
A groundbreaking study has uncovered a concerning trend in public trust towards AI-generated legal advice. Researchers found that people without legal expertise are more inclined to rely on advice from ChatGPT, an AI language model, than from human lawyers, particularly when the source of the advice is not disclosed [1][2].
The study, conducted across three experiments involving 288 participants, revealed several important insights:
When the source of legal advice was not disclosed, participants showed a greater willingness to act on AI-generated advice compared to advice from human lawyers.
Even when participants were informed about the source of the advice, they were equally likely to follow ChatGPT's recommendations as those from a lawyer.
Participants demonstrated a slight ability to distinguish between AI and human-generated content, but this ability was only marginally better than random guessing [1][2].
The researchers identified potential reasons for the preference towards AI-generated advice:
Language Complexity: ChatGPT tends to use more complex language, which may be perceived as more authoritative or knowledgeable.
Confidence in Delivery: AI models often present information with high confidence, making it challenging for users to discern between accurate and potentially flawed advice [1][2].
This trend raises significant concerns, particularly in high-stakes domains like law:
Misinformation Risk: AI models are known to produce "hallucinations" – inaccurate or nonsensical content – which could lead to serious consequences if acted upon without verification.
Overreliance on AI: The public's willingness to trust AI-generated advice could result in neglecting human expertise and critical thinking [1][2].
The study's authors emphasize the need for a two-pronged approach to address these challenges:
AI Regulation: Initiatives like the EU AI Act are crucial in ensuring transparency in AI-generated content. Article 50.9 of the act requires AI-generated text to be clearly marked and detectable; a hypothetical sketch of such marking appears after this list.
Improving AI Literacy: The public needs to develop better skills in critically assessing AI-generated content, understanding its limitations, and recognizing the importance of human expertise [1][2].
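The act does not prescribe a specific marking format, so the following Python sketch is purely a hypothetical illustration of what "machine-readable and detectable" marking could look like: a generator wraps its output in a small metadata envelope that downstream tools can check. All field and function names here are invented for the example.

```python
import json

# Hypothetical illustration of machine-readable AI-content marking.
# The EU AI Act does not mandate this format; field names are invented.

def mark_ai_generated(text: str, model: str) -> str:
    """Wrap generated text in a metadata envelope declaring its origin."""
    envelope = {
        "content": text,
        "provenance": {
            "artificially_generated": True,
            "generator": model,
        },
    }
    return json.dumps(envelope)

def is_ai_generated(payload: str) -> bool:
    """Detect whether a payload carries the AI-generated marker."""
    try:
        data = json.loads(payload)
    except json.JSONDecodeError:
        return False  # unmarked plain text
    return bool(data.get("provenance", {}).get("artificially_generated"))

marked = mark_ai_generated("You may be entitled to compensation...", "example-llm")
print(is_ai_generated(marked))  # True
```

In practice, real marking schemes tend to rely on standardized provenance metadata or statistical watermarking of the text itself rather than a bespoke JSON wrapper, since a wrapper is trivially stripped.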
While AI can be a valuable tool for initial legal inquiries, the researchers stress the importance of verifying any AI-generated advice with human lawyers before taking significant actions. This approach allows for harnessing the benefits of AI while mitigating potential risks [1][2].
As AI continues to integrate into various aspects of daily life, from home assistants to complex task management, the need for responsible use and critical evaluation of AI-generated content becomes increasingly crucial. This study serves as a timely reminder of the challenges and opportunities presented by AI in professional domains like law.