3 Sources
[1]
It's "frighteningly likely" many US courts will overlook AI errors, expert says
Order in the court! Order in the court! Judges are facing outcry over a suspected AI-generated order in a court. Fueling nightmares that AI may soon decide legal battles, a Georgia court of appeals judge, Jeff Watkins, explained why a three-judge panel vacated an order last month that appears to be the first known ruling in which a judge sided with someone seemingly relying on fake AI-generated case citations to win a legal fight.

Now, experts are warning that judges overlooking AI hallucinations in court filings could easily become commonplace, especially in the typically overwhelmed lower courts. And so far, only two states have moved to force judges to sharpen their tech competencies and adapt so they can spot AI red flags and theoretically stop disruptions to the justice system at all levels.

The recently vacated order came in a Georgia divorce dispute, where Watkins explained that the order itself was drafted by the husband's lawyer, Diana Lynch. That's a common practice in many courts, where overburdened judges historically rely on lawyers to draft orders. But that protocol today faces heightened scrutiny as lawyers and non-lawyers increasingly rely on AI to compose and research legal filings, and judges risk rubberstamping fake opinions by not carefully scrutinizing AI-generated citations.

The errant order partly relied on "two fictitious cases" to deny the wife's petition -- which Watkins suggested were "possibly 'hallucinations' made up by generative-artificial intelligence" -- as well as two cases that had "nothing to do" with the wife's petition. Lynch was hit with $2,500 in sanctions after the wife appealed, and the husband's response -- which also appeared to be prepared by Lynch -- cited 11 additional cases that were "either hallucinated" or irrelevant. Watkins was further peeved that Lynch supported a request for attorney's fees for the appeal by citing "one of the new hallucinated cases," writing it added "insult to injury."

Worryingly, the judge could not confirm whether the fake cases were generated by AI or even determine if Lynch inserted the bogus cases into the court filings, indicating how hard it can be for courts to hold lawyers accountable for suspected AI hallucinations. Lynch did not respond to Ars' request to comment, and her website appeared to be taken down following media attention to the case.

But Watkins noted that "the irregularities in these filings suggest that they were drafted using generative AI" while warning that many "harms flow from the submission of fake opinions." Exposing deceptions can waste time and money, and AI misuse can deprive people of raising their best arguments. Fake orders can also soil judges' and courts' reputations and promote "cynicism" in the justice system. If left unchecked, Watkins warned, these harms could pave the way to a future where a "litigant may be tempted to defy a judicial ruling by disingenuously claiming doubt about its authenticity."

"We have no information regarding why Appellee's Brief repeatedly cites to nonexistent cases and can only speculate that the Brief may have been prepared by AI," Watkins wrote.

Ultimately, Watkins remanded the case, partly because the fake cases made it impossible for the appeals court to adequately review the wife's petition to void the prior order. But no matter the outcome of the Georgia case, the initial order will likely forever be remembered as a cautionary tale for judges increasingly scrutinized for failures to catch AI misuses in court.
"Frighteningly likely" judge's AI misstep will be repeated John Browning, a retired justice on Texas' Fifth Court of Appeals and now a full-time law professor at Faulkner University, last year published a law article Watkins cited that warned of the ethical risks of lawyers using AI. In the article, Browning emphasized that the biggest concern at that point was that lawyers "will use generative AI to produce work product they treat as a final draft, without confirming the accuracy of the information contained therein or without applying their own independent professional judgment." Today, judges are increasingly drawing the same scrutiny, and Browning told Ars he thinks it's "frighteningly likely that we will see more cases" like the Georgia divorce dispute, in which "a trial court unwittingly incorporates bogus case citations that an attorney includes in a proposed order" or even potentially in "proposed findings of fact and conclusions of law." "I can envision such a scenario in any number of situations in which a trial judge maintains a heavy docket and looks to counsel to work cooperatively in submitting proposed orders, including not just family law cases but other civil and even criminal matters," Browning told Ars. According to reporting from the National Center for State Courts, a nonprofit representing court leaders and professionals who are advocating for better judicial resources, AI tools like ChatGPT have made it easier for high-volume filers and unrepresented litigants who can't afford attorneys to file more cases, potentially further bogging down courts. Peter Henderson, a researcher who runs the Princeton Language+Law, Artificial Intelligence, & Society (POLARIS) Lab, told Ars that he expects cases like the Georgia divorce dispute aren't happening every day just yet. It's likely that a "few hallucinated citations go overlooked" because generally, fake cases are flagged through "the adversarial nature of the US legal system," he suggested. Browning further noted that trial judges are generally "very diligent in spotting when a lawyer is citing questionable authority or misleading the court about what a real case actually said or stood for." Henderson agreed with Browning that "in courts with much higher case loads and less adversarial process, this may happen more often." But Henderson noted that the appeals court catching the fake cases is an example of the adversarial process working. While that's true in this case, it seems likely that anyone exhausted by the divorce legal process, for example, may not pursue an appeal if they don't have energy or resources to discover and overturn errant orders. Judges' AI competency increasingly questioned While recent history confirms that lawyers risk being sanctioned, fired from their firms, or suspended from practicing law for citing fake AI-generated cases, judges will likely only risk embarrassment for failing to catch lawyers' errors or even for using AI to research their own opinions. Not every judge is prepared to embrace AI without proper vetting, though. To shield the legal system, some judges have banned AI. Others have required disclosures -- with some even demanding to know which specific AI tool was used -- but that solution has not caught on everywhere. Even if all courts required disclosures, Browning pointed out that disclosures still aren't a perfect solution since "it may be difficult for lawyers to even discern whether they have used generative AI," as AI features become increasingly embedded in popular legal tools. 
One day, it "may eventually become unreasonable to expect" lawyers "to verify every generative AI output," Browning suggested. Most likely -- as a judicial ethics panel from Michigan has concluded -- judges will determine "the best course of action for their courts with the ever-expanding use of AI," Browning's article noted. And the former justice told Ars that's why education will be key, for both lawyers and judges, as AI advances and becomes more mainstream in court systems. In an upcoming summer 2025 article in The Journal of Appellate Practice & Process, "The Dawn of the AI Judge," Browning attempts to soothe readers by saying that AI isn't yet fueling a legal dystopia. And humans are unlikely to face "robot judges" spouting AI-generated opinions any time soon, the former justice suggested. Standing in the way of that, at least two states -- Michigan and West Virginia -- "have already issued judicial ethics opinions requiring judges to be 'tech competent' when it comes to AI," Browning told Ars. And "other state supreme courts have adopted official policies regarding AI," he noted, further pressuring judges to bone up on AI. Meanwhile, several states have set up task forces to monitor their regional court systems and issue AI guidance, while states like Virginia and Montana have passed laws requiring human oversight for any AI systems used in criminal justice decisions. Judges must prepare to spot obvious AI red flags Until courts figure out how to navigate AI -- a process that may look different from court to court -- Browning advocates for more education and ethical guidance for judges to steer their use and attitudes about AI. That could help equip judges to avoid both ignorance of the many AI pitfalls and overconfidence in AI outputs, potentially protecting courts from AI hallucinations, biases, and evidentiary challenges sneaking past systems requiring human review and scrambling the court system. An overlooked part of educating judges could be exposing AI's influence so far in courts across the US. Henderson's team is planning research that tracks which models attorneys are using most in courts. That could reveal "the potential legal arguments that these models are pushing" to sway courts -- and which judicial interventions might be needed, Henderson told Ars. "Over the next few years, researchers -- like those in our group, the POLARIS Lab -- will need to develop new ways to track the massive influence that AI will have and understand ways to intervene," Henderson told Ars. "For example, is any model pushing a particular perspective on legal doctrine across many different cases? Was it explicitly trained or instructed to do so?" Henderson also advocates for "an open, free centralized repository of case law," which would make it easier for everyone to check for fake AI citations. "With such a repository, it is easier for groups like ours to build tools that can quickly and accurately verify citations," Henderson said. That could be a significant improvement to the current decentralized court reporting system that often obscures case information behind various paywalls. Dazza Greenwood, who co-chairs MIT's Task Force on Responsible Use of Generative AI for Law, did not have time to send comments but pointed Ars to a LinkedIn thread where he suggested that a structural response may be needed to ensure that all fake AI citations are caught every time. 
He recommended that courts create "a bounty system whereby counter-parties or other officers of the court receive sanctions payouts for fabricated cases cited in judicial filings that they reported first." That way, lawyers will know that their work will "always" be checked and thus may shift their behavior if they've been automatically filing AI-drafted documents. In turn, that could alleviate pressure on judges to serve as watchdogs. It also wouldn't cost much -- mostly just redistributing to the AI spotters the same fees that lawyers are sanctioned.

Novel solutions like this may be necessary, Greenwood suggested. Responding to a question asking if "shame and sanctions" are enough to stop AI hallucinations in court, Greenwood said that eliminating AI errors is imperative because it "gives both otherwise generally good lawyers and otherwise generally good technology a bad name." Continuing to ban AI or suspend lawyers as the preferred solution risks dwindling court resources just as cases likely spike, rather than confronting the problem head-on. Of course, there's no guarantee that the bounty system would work. But "would the fact of such definite confidence that your cites will be individually checked and fabricated cites reported be enough to finally... convince lawyers who cut these corners that they should not cut these corners?"

In the absence of a fake case detector like Henderson wants to build, experts told Ars that there are some obvious red flags that judges can note to catch AI-hallucinated filings. Any case number with "123456" in it probably warrants review, Henderson told Ars. And Browning noted that AI tends to mix up locations for cases, too. "For example, a cite to a purported Texas case that has a 'S.E. 2d' reporter wouldn't make sense, since Texas cases would be found in the Southwest Reporter," Browning said, noting that some appellate judges have already relied on this red flag to catch AI misuses.

Those red flags would perhaps be easier to check with the open source tool that Henderson's lab wants to make, but Browning said there are other tell-tale signs of AI usage that anyone who has ever used a chatbot is likely familiar with. "Sometimes a red flag is the language cited from the hallucinated case; if it has some of the stilted language that can sometimes betray AI use, it might be a hallucination," Browning said.

Judges already issuing AI-assisted opinions

Several states have assembled task forces like Greenwood's to assess the risks and benefits of using AI in courts. In Georgia, the Judicial Council of Georgia Ad Hoc Committee on Artificial Intelligence and the Courts released a report in early July providing "recommendations to help maintain public trust and confidence in the judicial system as the use of AI increases" in that state.

Adopting the committee's recommendations could establish "long-term leadership and governance"; a repository of approved AI tools; education and training for judicial professionals; and more transparency on AI used in Georgia courts. But the committee expects it will take three years to implement those recommendations while AI use continues to grow.

Possibly complicating things further as judges start to explore using AI assistants to help draft their filings, the committee concluded that it's still too early to tell if the judges' code of conduct should be changed to prevent "unintentional use of biased algorithms, improper delegation to automated tools, or misuse of AI-generated data in judicial decision-making."
That means, at least for now, that there will be no code-of-conduct changes in Georgia, where the only case in which AI hallucinations are believed to have swayed a judge has been found. Notably, the committee's report also confirmed that there are no role models for courts to follow, as "there are no well-established regulatory environments with respect to the adoption of AI technologies by judicial systems."

Browning, who chaired a now-defunct Texas AI task force, told Ars that judges lacking guidance will need to stay on their toes to avoid trampling legal rights. (A spokesperson for the State Bar of Texas told Ars the task force's work "concluded" and "resulted in the creation of the new standing committee on Emerging Technology," which offers general tips and guidance for judges in a recently launched AI Toolkit.)

"While I definitely think lawyers have their own duties regarding AI use, I believe that judges have a similar responsibility to be vigilant when it comes to AI use as well," Browning said. Judges will continue sorting through AI-fueled submissions not just from pro se litigants representing themselves but also from up-and-coming young lawyers who may be more inclined to use AI, and even seasoned lawyers who have been sanctioned up to $5,000 for failing to check AI drafts, Browning suggested.

In his upcoming "AI Judge" article, Browning points to at least one judge, 11th Circuit Court of Appeals Judge Kevin Newsom, who has used AI as a "mini experiment" in preparing opinions for both a civil case involving an insurance coverage issue and a criminal matter focused on sentencing guidelines. Browning seems to appeal to judges' egos to get them to study up so they can use AI to enhance their decision-making and possibly expand public trust in courts, not undermine it.

"Regardless of the technological advances that can support a judge's decision-making, the ultimate responsibility will always remain with the flesh-and-blood judge and his application of very human qualities -- legal reasoning, empathy, strong regard for fairness, and unwavering commitment to ethics," Browning wrote. "These qualities can never be replicated by an AI tool."
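The red flags Henderson and Browning describe above lend themselves to simple automated screening. Below is a minimal, illustrative Python sketch of that idea, assuming only the two signals named in the reporting: placeholder-style case numbers such as "123456" and reporter abbreviations that don't match the cited jurisdiction. The regular expression, the reporter-to-state mapping, and the flag_suspicious_citations helper are assumptions made for this example, not part of any tool built by Henderson's lab or described by Browning.

```python
import re

# Reporters in which a state's cases would normally appear (partial, illustrative).
EXPECTED_REPORTERS = {
    "Tex.": {"S.W.", "S.W.2d", "S.W.3d"},  # Texas cases appear in the Southwest Reporter (S.W.)
    "Ga.": {"S.E.", "S.E.2d"},             # Georgia cases appear in the Southeastern Reporter (S.E.)
}

# Matches citation-shaped strings such as "812 S.W.2d 344 (Tex. 1991)".
CITATION_RE = re.compile(
    r"(?P<volume>\d+)\s+(?P<reporter>[A-Z][A-Za-z\.]*(?:\s?\dd)?)\s+(?P<page>\d+)"
    r"\s*\((?P<court>[^)]*?)(?P<year>\d{4})\)"
)

def flag_suspicious_citations(text: str) -> list:
    """Return human-readable warnings for citations that trip a red-flag heuristic."""
    warnings = []
    # Red flag 1: placeholder-looking numbers such as "123456" anywhere in the filing.
    if "123456" in text:
        warnings.append('Contains the placeholder-style number "123456"; verify case/docket numbers.')
    # Red flag 2: the reporter doesn't match the jurisdiction named in the parenthetical.
    for m in CITATION_RE.finditer(text):
        reporter, court = m.group("reporter"), m.group("court")
        for state, reporters in EXPECTED_REPORTERS.items():
            if state in court and reporter not in reporters:
                warnings.append(
                    f"{m.group(0)}: a {state} case reported in {reporter} is unexpected; "
                    f"expected one of {sorted(reporters)}."
                )
    return warnings

if __name__ == "__main__":
    sample = "See Smith v. Jones, 812 S.E.2d 344 (Tex. App. 2019); Case No. 123456."
    for warning in flag_suspicious_citations(sample):
        print(warning)
```

A screen like this would only surface candidates for human review; as Browning notes, softer signals such as stilted, chatbot-like language still require a reader's judgment.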
[2]
AI-generated legal filings are making a mess of the judicial system
In context: Large language models have already been used to cheat in school and spread misinformation in news reports. Now they're creeping into the courts, fueling bogus filings that judges face amid heavy caseloads - raising new risks for a legal system already stretched thin.

A recent Ars Technica report detailed a Georgia appeals court decision highlighting a growing risk for the US legal system: AI-generated hallucinations creeping into court filings and even influencing judicial rulings. In the divorce dispute, the husband's lawyer submitted a draft order peppered with citations to cases that do not exist - likely invented by generative AI tools like ChatGPT. The initial trial court signed off on the document and subsequently ruled in the husband's favor. Only when the wife appealed did the fabricated citations come to light.

The appellate panel, led by Judge Jeff Watkins, vacated the order, noting that the bogus cases had undermined the court's ability to review the decision. Watkins didn't mince words, calling the citations possible generative-artificial intelligence hallucinations. The court fined the husband's lawyer $2,500. That might sound like a one-off, but a lawyer was fined $15,000 in February under similar circumstances. Legal experts warn it is likely a sign of things to come.

Generative AI tools are notoriously prone to fabricating information with convincing confidence - a behavior labeled "hallucination." As AI becomes more accessible to both overwhelmed lawyers and self-represented litigants, experts say judges will increasingly face filings filled with fake cases, phantom precedents, and garbled legal reasoning dressed up to look legitimate.

The problem is compounded by a legal system already stretched thin. In many jurisdictions, judges routinely rubberstamp orders drafted by attorneys. However, the use of AI raises the stakes.

Appellate Court Opinion on False Legal Citations via Ars Technica

"I can envision such a scenario in any number of situations where a trial judge maintains a heavy docket," said John Browning, a former Texas appellate judge and legal scholar who has written extensively on AI ethics in law. Browning told Ars Technica he thinks it's "frighteningly likely" these kinds of mistakes will become more common. He and other experts warn that courts, especially at the lower levels, are ill-prepared to handle this influx of AI-driven nonsense.

Only two states - Michigan and West Virginia - currently require judges to maintain a basic level of "tech competence" when it comes to AI. Some judges have banned AI-generated filings altogether or mandated disclosure of AI use, but these policies are patchy, inconsistent, and hard to enforce due to case volume.

Meanwhile, AI-generated filings aren't always obvious. Large language models often invent realistic-sounding case names, plausible citations, and official-sounding legal jargon. Browning notes that judges can watch for telltale signs: incorrect court reporters, placeholder case numbers like "123456," or stilted, formulaic language. However, as AI tools become more sophisticated, these giveaways may fade. Researchers, like Peter Henderson at Princeton's Polaris Lab, are developing tools to track AI's influence on court filings and are advocating for open repositories of legitimate case law to simplify verification.
Others have floated novel solutions, such as "bounty systems" to reward those who catch fabricated cases before they slip through. For now, the Georgia divorce case stands as a cautionary tale - not just about careless lawyers, but about a court system that may be too overwhelmed to track AI use in every legal document. As Judge Watkins warned, if AI-generated hallucinations continue slipping into court records unchecked, they threaten to erode confidence in the justice system itself.
[3]
AI isn't just entering law offices -- it's challenging the entire legal playbook
In the legal field obsessed with precedents, AI is sparking a new one. Lawyers are incorporating legal AI "copilots" and related tools into their workflows, driven by a booming landscape of AI legal startups.

2024 marked a record-breaking year for legal-tech startups, which raised $4.98 billion overall, largely propelled by the booming interest in AI. And it hasn't slowed down one bit. This past June, one company that's leading the funding pack, Harvey, raised another $300 million at a $5 billion valuation after securing $300 million at $3 billion just three months prior. In recent weeks, Harvey also struck a major partnership with LexisNexis to integrate the prominent legal database's core research tools into its platform. Days later, law practice management company Clio acquired AI and legal research company vLex for $1 billion in what LawNext referred to as "a landmark deal that will undoubtedly reshape the legal tech landscape." The AI-ification of the legal industry may not be fully realized, but it's certainly underway.

"Lawyers are not big R&D people. They're not hackers and experimenters. They are 'tell me what this thing can do. Tell me it is safe to use it, and I'll use it.' We're not yet at a collective sense of that in the legal profession to have hit the threshold of widespread [AI] adoption, but I think it is slowly approaching," said Jordan Furlong, a lawyer and legal sector analyst. "I think we're probably going to hit it maybe within the next couple of years. It's hard to say -- it could be a lot sooner."

The impact of AI in the legal sector goes far beyond the tools lawyers use to complete tasks, however, and could dramatically shift legal processes in fundamental ways. Legal professionals now have to deal with the fallout of AI-generated and manipulated evidence and the larger societal erosion of trust in what we see and hear. There is a risk of biased AI systems being used as decision makers in the legal system. There's also the question of: if everyone gains access to tools to help them navigate legal documents and concerns, will people even need lawyers in the same ways they do today?

International law firm Troutman Pepper Locke is using several commercially available AI tools, such as the full suite of Thomson Reuters AI products, and was even an early investor in CoCounsel, Thomson Reuters' generative AI assistant for legal professionals. In 2023, the firm also rolled out an internal AI system it built on OpenAI's GPT-4. Called Athena, it acts as a chatbot-style assistant and is used for tasks from refining draft correspondence for clients to answering questions about the firm's internal processes. According to Troutman Pepper Locke chief innovation officer William Gaus, the firm's staff prompt Athena about 3,000 times every day.

The same type of tool that Troutman Pepper Locke built internally is exactly what so many legal AI startups are commercializing too. Legal AI company Filevine, for example, offers a suite of AI features tailored to specific legal tasks, such as a deposition tool to help lawyers analyze testimony in real time, a tool to automate the case validation process, and a tool to boost research efficiency by summarizing key details from legal documents. Cofounder and CEO Ryan Anderson said the suite of tools is designed to take lawyers through the entire legal journey, but the most popular product is still the more traditional copilot -- similar to Athena -- built to assist with any questions and tasks, which the company calls Chat With Your Case.
"I would say it will be a pretty dramatic change in how lawyers work. There's just no way lawyers won't be working with copilots in the next two years," Anderson said. "It's just a routine part of their daily work life." For Trouman Pepper Locke, the firm also found use of its AI tools outside of daily functions, such as using it heavily in the firm's recent merger. Gaus and the merger team itself used Athena as a thought partner, keeping running dialogues for different work streams of the merger such as finance, technology, and marketing. They also created an agentic capability to redraft the bios of the incoming 1,600 attorneys, which needed to be updated to account for the new firm's information and adhere to its existing style. Gaus said this made the process seamless compared to the six months it took to manually update attorney bios with the last merger, saving $200,000 in time spent. Overall, Gaus said they're currently finding AI to be most useful for backend admin tasks, which he believes are a great place to start because they're low risk. Lastly, the team built a functionality within Athena to make it easy for staff and attorneys from the incoming firm, Locke, to quickly get answers to any questions they'd have, from billing processes to IT. "We wanted to make sure to get these people answers as quickly as possible. So if we could build a system, like a generative AI system in this case, that could reliably get them answers to their questions around billing and collections, for example, we would obviously go for something like that over the old school methods," said Gaus. Furlong believes it's safe to say that many tasks lawyers have performed in the past will soon be carried out -- in whole or in part -- by machines. AI, he said, will "give lawyers both the opportunity and the mandate to shift gears" in terms of how they view themselves and the kind of services they provide, adding that their work days will look very different in the coming years. But beyond lawyers' workflows, there's also a range of potential impacts of AI that could reshape the field in much larger ways. For example, tools like ChatGPT or even dedicated legal AI tools for the public could provide legal resources to the many people who have legal problems but can't afford to hire a lawyer. This would be great for accessibility, but a vast increase in people filing legal cases could overwhelm the "already underfunded and backlogged" system of courts and tribunals, Furlong said. Additionally, while he's not worried about judges using AI to make decisions ("because that's what they love to do"), he is worried about a government deciding it doesn't need courts because it'd be cheaper and easier to have people submit their materials to an AI system that makes the decision. This would be particularly concerning because of the long-documented history of bias in decision-making algorithms and the datasets they're trained on across domains. This may seem like a far-out possibility, but it's one that's actively being grappled with. The EU addressed this exactly in the EU AI Act, deeming any AI systems intended to be used by judicial authorities or for interpreting the law as "high risk." The United Nations in a paper also pointed to the "Right to Equality Before the Law" as one of 10 human rights that are at risk with the emergence of generative AI, pointing to the use of technologies for automated decision-making and how AI can facilitate discrimination throughout society. 
Two of the larger impacts already coming into focus today are hallucinations and AI-generated evidence. The American Bar Association (ABA) recently warned lawyers to beware of deepfakes, which can be used to fabricate evidence or even make ransom demands, as well as the issue that defendants can now easily claim that evidence being used against them was generated or manipulated using AI. In the legal industry and courts system, where trust in what one sees and hears is paramount, the notion of AI hurtling society into a state where everything is questionable, and where even real media can be dismissed as AI, is especially destabilizing.

"The whole issue of fake evidence -- that one scares the hell out of me. I don't know. I have no idea how we're going to manage this," Furlong said. The ABA similarly stated that "there is still no universally accepted solution."

Hallucinations have perhaps garnered the most attention when it comes to AI and the legal industry. Two New York lawyers made headlines for being sanctioned after submitting a legal brief that included fictitious case citations generated by ChatGPT, and many are keeping tabs on a database from lawyer and legal AI researcher Damien Charlotin that tracks legal decisions in cases where lawyers included generative AI-produced hallucinated content. Researchers at Stanford HAI tested several prominent legal research copilots and found that the two that performed the best still made up information in about one out of six instances. To address this, the industry is leaning into citations, wherein the AI model directs the user to where it retrieved the information, putting the task of verifying accuracy squarely in the hands of the legal professional.

"I find it incredibly exciting. Terrifying for sure. Risky, no question. But really exciting," said Furlong of AI's overall impact on the industry. "There is so much that we could do if we get this right."
A recent case in Georgia highlights the risks of AI-generated content in legal proceedings, raising concerns about the integrity of the judicial system and the need for increased vigilance among judges and lawyers.
In a recent Georgia divorce case, the use of artificial intelligence (AI) in legal filings has raised significant concerns about the integrity of the US judicial system. A court order, which was later vacated, relied on fictitious cases that were likely generated by AI, highlighting the potential for widespread disruption in legal proceedings 1.
Source: TechSpot
The case in question involved a divorce dispute where the husband's lawyer, Diana Lynch, drafted an order that was initially approved by the court. However, upon appeal, it was discovered that the order cited two non-existent cases, possibly hallucinated by generative AI. Judge Jeff Watkins, who led the appellate panel, noted that the fake citations made it impossible to adequately review the wife's petition 1.
Experts warn that this incident is likely not isolated and could become increasingly common. John Browning, a retired justice and law professor, believes it's "frighteningly likely" that more cases will emerge where trial courts unwittingly incorporate bogus case citations 1. The problem is exacerbated by the heavy caseloads in many courts, particularly at lower levels, where judges often rely on lawyers to draft orders 2.
Identifying AI-generated content in legal filings presents a significant challenge. Large language models can produce realistic-sounding case names, plausible citations, and official-sounding legal jargon. While there are some telltale signs, such as incorrect court reporters or placeholder case numbers, these may become less obvious as AI tools become more sophisticated 2.
Source: Fortune
Despite these challenges, the legal tech industry is booming. In 2024, legal-tech startups raised a record-breaking $4.98 billion, largely driven by interest in AI. Companies like Harvey and vLex are leading the charge, with significant funding rounds and acquisitions reshaping the legal tech landscape 3.
While AI tools offer potential benefits in terms of efficiency and cost-saving, they also present significant risks. Jordan Furlong, a legal sector analyst, predicts that many tasks traditionally performed by lawyers will soon be carried out, at least in part, by machines 3. This shift could dramatically alter the role of lawyers and the nature of legal services.
Source: Ars Technica
In response to these challenges, some jurisdictions are taking action. Michigan and West Virginia now require judges to maintain a basic level of "tech competence" when it comes to AI 2. Researchers are also developing tools to track AI's influence on court filings and advocating for open repositories of legitimate case law to simplify verification.
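To illustrate how an open case-law repository could simplify that verification, here is a brief, hypothetical Python sketch: extract citation-shaped strings from a filing and look each one up against a central service. The repository URL, endpoint, and JSON response format are invented placeholders, since no such public service is specified in the reporting; this is a sketch of the general shape of the check, not an existing tool.

```python
import re
import requests

# Hypothetical endpoint standing in for the open, centralized case-law repository
# Henderson advocates; the URL and response schema are invented for this sketch.
REPOSITORY_LOOKUP_URL = "https://example-caselaw-repository.org/api/lookup"

# Matches citation-shaped strings such as "812 S.W.2d 344".
CITATION_RE = re.compile(r"\d+\s+[A-Z][A-Za-z\.]*(?:\s?\dd)?\s+\d+")

def verify_citations(filing_text: str) -> dict:
    """Look up every citation-shaped string in a filing and report whether it resolves."""
    results = {}
    for citation in sorted(set(CITATION_RE.findall(filing_text))):
        resp = requests.get(REPOSITORY_LOOKUP_URL, params={"cite": citation}, timeout=10)
        # Assume the hypothetical API answers {"found": true/false} for each query.
        results[citation] = resp.ok and resp.json().get("found", False)
    return results

if __name__ == "__main__":
    filing = "Plaintiff relies on 123 S.E.2d 456 and 812 S.W.2d 344."
    for cite, found in verify_citations(filing).items():
        status = "found in repository" if found else "NOT FOUND - verify manually"
        print(f"{cite}: {status}")
```

Until a repository like that exists, any citation that fails such a lookup would still need to be confirmed by hand against the official reporters.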
As the legal profession grapples with the implications of AI, it's clear that increased vigilance, education, and potentially new regulations will be necessary to maintain the integrity of the judicial system in the face of rapidly advancing technology.