3 Sources
[1]
Proposed New York law would bar AI chatbots from posing as lawyers, allow duped users to sue
March 5 (Reuters) - A proposed law working its way through New York's legislature would bar artificial intelligence chatbots from impersonating lawyers and other licensed professionals in the state, opening AI platforms up to lawsuits by users. The bill, whose sponsor called it the first of its kind in the country, would bar AI chatbots from giving substantive responses and offering advice that "if taken by a natural person" would constitute the unauthorized practice of law.

"Today, there is no law that says that a large language model cannot tell you that it is a lawyer, that it is a licensed therapist, and then give you legal advice or therapy accordingly," New York State Senator and bill sponsor Kristen Gonzalez told Reuters Thursday. "I think that's really concerning."

Chatbot users should be able to sue if they rely on erroneous legal information provided by a platform that represents itself as a lawyer, Gonzalez said. Under the bill, AI platforms would not be able to avoid liability by notifying users that they are interacting with a "non-human chatbot," and users could seek damages in court against companies that violate the law.

New York, like all U.S. states, prohibits people from representing themselves as lawyers or offering legal services without being licensed to practice law. The bill would apply to law and to other licensed professions, such as medicine and mental health care. It is part of a larger suite of New York bills seeking to regulate AI, including one that would protect minors from unsafe AI chatbot features and one that would require AI platforms to "conspicuously" display a notice that outputs may be inaccurate.

OpenAI and Anthropic, which operate two of the most popular AI chatbots, did not immediately respond to requests on Thursday for comment on the proposed professional impersonation law.

The bill, which advanced out of the New York Senate's Internet and Technology Committee late last month, comes as AI platforms face mounting scrutiny over the impacts and ethics of the rapidly expanding technology. ChatGPT maker OpenAI, Google's Gemini, and Character.AI are each facing lawsuits alleging that the tools led to users' suicides. The companies have denied wrongdoing but have settled some cases.

Debates are also growing over the technology's use in law. Nippon Life Insurance Company of America sued OpenAI on Wednesday, accusing ChatGPT of practicing law without a license and of helping a former disability claimant breach a settlement and flood a federal court docket with meritless filings. OpenAI said the case lacks merit. A growing number of lawyers have separately faced court sanctions for submitting briefs with AI-generated fictitious case citations and other hallucinated material, with some judges imposing fines.

Reporting by Karen Sloan
[2]
New York lawmakers move to block AI chatbots from giving legal or medical advice
The legislation would introduce lawsuits as a way to enforce limits on AI advice
* A proposed New York bill would ban AI chatbots from providing legal or medical advice
* The legislation would allow users to sue companies if their chatbots impersonate licensed professionals
* Lawmakers say the measure is meant to protect the public as AI tools become more widely used

AI chatbots have spent the past few years answering nearly every kind of question imaginable, but New York lawmakers are preparing to draw a firm line around at least a couple of categories of conversation. A bill advancing through the state legislature would prohibit AI chatbots from providing legal or medical advice and would allow users to sue the companies behind those systems if they cross that boundary.

The proposal, Senate Bill S7263, would apply to AI chatbots that mimic or impersonate licensed professionals such as lawyers or physicians. At its heart, the bill extends to AI the same principle that bars individuals from practicing law or medicine without the appropriate licenses. That rule is meant to ensure that people receive guidance from trained professionals who can be held accountable for their advice. If an AI chatbot responds in a way that effectively substitutes for licensed legal or medical advice, its developers could be in violation of the law.

The bill, which includes other AI safety measures, recently passed out of the New York Senate's Internet and Technology Committee with unanimous support. Chatbot providers would also have to clearly inform users that they are interacting with an artificial intelligence system rather than a human professional. Even if a chatbot displays a warning that it is not a doctor or lawyer, that disclaimer would not shield the company from liability if the system still provides prohibited advice.

The bill is also part of a larger effort to regulate AI chatbots in New York. Other bills focus on protecting minors who interact with AI chatbots and on strengthening transparency requirements for generative AI systems and synthetic media.

"People deserve real care from real people," State Senator Kristen Gonzalez, who introduced the bill, said in a statement. "They deserve transparency, accountability, and the promise that their data is secure while utilizing technology."

AI advice

To enforce the law, individuals could file civil lawsuits against companies whose AI chatbots violate the rule. Users could seek damages and recover legal fees if they successfully prove that a chatbot provided unauthorized professional advice.

With millions of people using AI chatbots to draft emails and answer questions on topics ranging from cooking to tax policy, it is not surprising that many treat AI answers as genuine advice. That is precisely the situation lawmakers hope to avoid in areas where mistakes could carry serious consequences.

Educational explanations of general concepts would still be allowed. What lawmakers want to prevent is the scenario in which a chatbot confidently instructs someone how to treat a medical condition or interpret a legal contract. But there will always be ambiguous situations: a chatbot might explain the symptoms of a medical condition by summarizing publicly available information, yet that same explanation could influence a user's health decisions, making it resemble medical advice in practice.

Despite those concerns, the broader trend toward regulating artificial intelligence appears unlikely to slow.
AI's growing influence has prompted lawmakers to ask whether the technology should face rules similar to those that govern traditional professions. Technology regulation often spreads from one jurisdiction to another, and laws enacted in large states frequently become models for similar legislation elsewhere. For AI developers, then, the New York proposal offers a preview of the kinds of questions that governments will increasingly ask, and that they will want AI chatbots not to answer.
[3]
New York lawmakers want AI chatbots to stop pretending to be doctors or lawyers
The bill applies to chatbots that give advice in the fields of medicine, law, dentistry, veterinary medicine, physical therapy, pharmacy, nursing, podiatry, optometry, engineering, land surveying, geology, architecture, psychology, and social work.

New York isn't acting alone. Other states have passed or are considering similar laws, though with varying scopes and enforcement methods, and they tend to focus primarily on healthcare:

* California's AB 489, enacted in 2025, does something similar but with a narrower scope, targeting AI systems that misrepresent their information as coming from licensed healthcare professionals. AB 489 relies on state healthcare boards for enforcement, however, and doesn't provide a private right of action (civil suit) for legal recourse.
* A new Nevada law, AB 406, which went into effect last July, prohibits the advertising and operation of AI systems designed to dispense professional mental and behavioral healthcare therapy. The law also limits how licensed professionals can use AI in their practices.
* Last August, Illinois passed HB 1806, which prohibits licensed therapists in the state from using AI to make treatment decisions or communicate with clients. The law also prohibits tech companies from advertising or offering AI-powered therapy services in the state without the involvement of a licensed professional.
* Utah passed a similar law, HB 452, that puts restrictions and disclosure requirements on chatbots that appear to offer an alternative to human mental health therapy or advice. The law went into effect in 2025.

Professional medical groups have also begun weighing in on the risks. The American Medical Association doesn't call for a ban on AI chatbots dispensing health information, but it worries that consumer advice from LLMs might be false or misleading. "Notably, tools such as ChatGPT have shown a not-uncommon tendency to falsify references cited in response to these queries," the AMA says in a policy paper, adding that AI tools have demonstrated the ability to generate fraudulent scientific or medical literature to support health advice.

Mental health advice is a particularly sensitive area, perhaps because many chatbot users, especially younger ones, use AI as a counselor or therapist. A 2025 JAMA Network study found that 13% of all respondents used chatbots for mental health advice, with 22% of those ages 18 to 21 doing so.
New York lawmakers are advancing legislation that would prohibit AI chatbots from impersonating licensed professionals like lawyers and doctors. Senate Bill S7263 allows users to sue companies whose chatbots provide unauthorized legal or medical advice, marking what sponsors call the first law of its kind in the country.
New York legislators are pushing forward with groundbreaking legislation designed to prevent AI chatbots from masquerading as licensed professionals and providing advice that could have serious consequences for users. Senate Bill S7263, which recently passed out of the New York Senate's Internet and Technology Committee with unanimous support, would specifically bar AI chatbots from giving substantive responses and offering advice that would constitute the unauthorized practice of law or medicine if delivered by a human [1]. The bill represents what its sponsor, State Senator Kristen Gonzalez, describes as the first of its kind in the country [1].
The proposed New York law would create a private right of action, allowing chatbot users to sue companies if they rely on erroneous legal information provided by a platform that represents itself as a lawyer. "Today, there is no law that says that a large language model cannot tell you that it is a lawyer, that it is a licensed therapist, and then give you legal advice or therapy accordingly," Gonzalez told Reuters. "I think that's really concerning" [1]. Critically, AI platforms would not be able to avoid liability simply by notifying users that they are interacting with a non-human chatbot, and users could seek damages in court against companies that violate the law [1]. This enforcement mechanism through civil lawsuits distinguishes the New York approach from some other state efforts.

The legislation extends far beyond lawyers and doctors: the bill applies to chatbots that give advice in the fields of medicine, law, dentistry, veterinary medicine, physical therapy, pharmacy, nursing, podiatry, optometry, engineering, land surveying, geology, architecture, psychology, and social work [3]. This comprehensive approach reflects growing concerns about consumer protection as these tools become more widely integrated into daily life. The measure is part of a larger suite of New York bills seeking to address AI safety, including one that would protect minors from unsafe AI chatbot features and another requiring AI platforms to "conspicuously" display a notice that outputs may be inaccurate [1].

The timing of this legislation coincides with mounting scrutiny over the impacts and ethical implications of rapidly expanding AI technology. ChatGPT maker OpenAI, Google's Gemini, and Character.AI are each facing lawsuits alleging that the tools led to users' suicides, though the companies have denied wrongdoing and settled some cases [1]. Just this week, Nippon Life Insurance Company of America sued OpenAI, accusing ChatGPT of practicing law without a license [1]. A growing number of lawyers have also faced court sanctions for submitting briefs with AI-generated fictitious case citations and other AI hallucinations, with some judges imposing fines [1].

Mental health advice from AI chatbots has emerged as an especially sensitive area, particularly because many users, especially younger ones, turn to AI as a counselor or therapist. A 2025 JAMA Network study found that 13% of all respondents used chatbots for mental health advice, with 22% of those ages 18 to 21 doing so [3]. The American Medical Association has expressed concerns that consumer advice from large language models might be false or misleading, noting that "tools such as ChatGPT have shown a not-uncommon tendency to falsify references cited in response to these queries" [3].
New York isn't acting alone in its efforts to regulate AI chatbots impersonating licensed professionals. California's AB 489, enacted in 2025, targets AI systems that misrepresent their information as coming from licensed healthcare professionals, though it relies on state healthcare boards for enforcement and doesn't provide a private right of action [3]. Nevada's AB 406, which went into effect last July, prohibits the advertising and operation of AI systems designed to dispense professional mental and behavioral healthcare therapy [3]. Illinois passed HB 1806 last August, prohibiting licensed therapists from using AI to make treatment decisions or communicate with clients [3]. Utah also enacted HB 452, putting restrictions and disclosure requirements on chatbots that appear to offer an alternative to human mental health therapy [3].

For AI developers, the New York proposal offers a preview of the kinds of questions that governments will increasingly ask about transparency and accountability. OpenAI and Anthropic, which operate two of the most popular AI chatbots, did not immediately respond to requests for comment on the proposed professional impersonation law [1]. The legislation would still allow educational explanations of general concepts, but lawmakers want to avoid scenarios in which a chatbot confidently instructs someone how to treat a medical condition or interpret a legal contract [2]. As technology regulation often spreads from one jurisdiction to another, laws enacted in large states like New York frequently become models for similar legislation elsewhere [2]. The emphasis on data privacy and consumer protection signals that the broader trend toward regulating artificial intelligence is unlikely to slow, with AI's growing influence prompting lawmakers to ask whether the technology should face rules similar to those that govern traditional professions.

Summarized by Navi