2 Sources
[1]
Bipartisan Legislation Targets Rising Threat of AI-Powered Impersonation and Fraud - Decrypt
Hackers used AI to impersonate White House Chief of Staff Susie Wiles and Secretary of State Marco Rubio in May and July. Congress is cracking down on AI-powered scams with bipartisan legislation that would send fraudsters to prison for decades after brazen impersonation attacks targeted America's top officials.

The AI Fraud Deterrence Act, introduced by Rep. Ted Lieu (D-CA) and Rep. Neal Dunn (R-FL) on Tuesday, would raise maximum fines to $2 million and extend prison sentences to up to 30 years for bank fraud committed with AI assistance, according to a Tuesday statement. The legislation targets wire fraud, mail fraud, money laundering, and impersonation of federal officials.

"AI has lowered the barrier of entry for scammers, which can have devastating effects," Lieu said in the statement, warning that impersonations of U.S. officials "can be disastrous for our national security."

The bill comes after scammers used AI in May to breach White House Chief of Staff Susie Wiles's cellphone, impersonating her voice in calls to senators, governors, business leaders, and other high-level contacts. Two months later, fraudsters mimicked Secretary of State Marco Rubio's voice in calls to three foreign ministers, a member of Congress, and a governor in an apparent attempt to obtain sensitive information and account access, according to the bill.

The bill adopts the 2020 National AI Initiative Act's definition of AI and carves out First Amendment protections, exempting satire, parody, and other expressive uses that include a clear disclosure of inauthenticity.

AI-aided mail and wire fraud would carry up to 20 years in prison and $1 million in fines, with standard penalties rising to $2 million. AI-driven bank fraud could draw 30 years and a $2 million penalty. AI-assisted money laundering would carry up to 20 years in prison and fines of $1 million or three times the transaction value, and AI impersonation of federal officials would bring three years and a $1 million penalty.
"AI is advancing at a rapid pace, and our laws have to keep pace with it," Dunn noted, cautioning that when criminals use AI to steal identities or defraud Americans, "the consequences should be severe enough to match the crime."

Meanwhile, President Trump is reportedly weighing an executive order to dismantle state AI laws and assert federal primacy, even as more than 200 state lawmakers urge Congress to reject House Republicans' push to fold an AI-preemption clause into the defense bill. A similar moratorium collapsed in July after a 99-1 Senate vote, and opposition has since widened, though a draft order circulating last week shows the White House considering its own path to override state rules.

Mohith Agadi, co-founder of Provenance AI, an AI agent and fact-checking SaaS backed by Fact Protocol, told Decrypt that the bipartisan nature of this bill points to a growing consensus that "AI-driven impersonation and fraud demand urgent action." "The real challenge is proving in court that AI was used," Agadi said. "Synthetic content can be difficult to attribute, and existing forensic tools are inconsistent."

"Lawmakers need to pair these penalties with investments in digital forensics and provenance systems like C2PA that clearly document a content's origin," he noted, or else we risk creating laws that are "conceptually strong but practically hard to enforce."
[2]
New AI fraud bill will seek to criminalize deepfakes of federal officials
The bill is being spearheaded by Representative Ted Lieu, D-Ca., and Representative Neal Dunn, R-Fl.

Lawmakers are hoping to address the increasing use of artificial intelligence by fraudsters in a new proposal that would seek to expand penalties for AI scams and criminalize the impersonation of federal officials with AI. The AI Fraud Deterrence Act, which is set to be proposed Tuesday by Representative Ted Lieu, D-Ca., and Representative Neal Dunn, R-Fl., would update criminal definitions and penalties for fraud to account for the rise of AI.

"As AI technology advances at a rapid pace, our laws must keep up," Dunn said in a statement announcing the bill. "The AI Fraud Deterrence Act strengthens penalties for crimes related to fraud committed with the help of AI. I am proud to co-lead this legislation to protect the identities of the public and prevent misuse of this innovative technology."

"The majority of American people want sensible guardrails on AI. They don't think a complete Wild West is helpful," Lieu told NBC News last week.

The proposed law would double the maximum penalty for defrauding financial institutions from $1 million to $2 million when AI is knowingly used as part of the crime. The bill would also explicitly include AI-mediated deception in the definitions of both mail fraud and wire fraud, the latter more commonly known for covering fraud involving "radio or television communication in interstate or foreign commerce," opening up the explicit possibility of charging individuals who use AI to conduct either type of fraud. Both would be punishable by up to $1 million in fines and up to 20 and 30 years in prison, respectively.

The draft also criminalizes the impersonation of federal officials with AI deepfakes, citing AI's use in successful attempts to mimic White House Chief of Staff Susie Wiles and Secretary of State Marco Rubio earlier this year.
While fraud has existed for millennia, experts say AI could exacerbate it by easing access to fraud-making tools and increasing the quality of fraudulent outputs. People who, pre-AI, would not have expended the energy required to commit fraud might now be unbothered by entering a few phrases into image- or video-generation software to produce a fraudulent image or document. By using AI, fraudsters can also create higher-quality faked media or documents compared to often-sloppy or clearly faked manual efforts.

In December, the FBI warned that "generative AI reduces the time and effort criminals must expend to deceive their targets." The alert further cautioned that AI "can correct for human errors that might otherwise serve as warning signs for fraud."

As reported by the New York Times, expense- and reimbursement-management companies like Expensify, AppZen, and SAP's Concur all implemented tools to screen for fraudulent, AI-generated receipts earlier this year. AppZen said that roughly 14% of all fraudulent documents submitted in September were generated by AI, up from zero AI-fueled incidents a year before.

Maura R. Grossman, a research professor of computer science at the University of Waterloo and a practicing lawyer, told NBC News that AI enables a new era of deception: "AI presents a scale, a scope, and a speed for fraud that is very, very different from frauds in the past."

Many observers worry that existing institutions, like courts, cannot keep up with AI's rapid development. "AI years are dog years," said Hany Farid, professor of computer science at the University of California, Berkeley and co-founder of GetReal Security, a leading digital-media authentication company, referencing the speed of AI progress. Whereas AI-generated images could previously be identified by the appearance of extra feet or hands due to the rudimentary nature of prior image-generation models, today's image-generation models are much more accurate.
The FBI's warning from December urged individuals to search for discrepancies in images and videos to identify AI-generated media: "Look for subtle imperfections in images and videos, such as distorted hands or feet," the alert said. But to Farid, this 11-month-old advice is wrong and even harmful. "The multiple hands trick, that's not true anymore," Farid said. "You can't look for hands or feet. None of that stuff works."

Emphasizing the importance of labeling AI-generated content, Rep. Lieu and Rep. Dunn's proposed bill clarifies that there is a time and place for AI-generated media. Tuesday's draft includes a carveout for AI in satire or other acts protected by the First Amendment, "provided such content includes clear disclosure that it is not authentic."
Congress introduces bipartisan legislation to combat AI-powered fraud and impersonation, proposing severe penalties including up to 30 years in prison and $2 million fines for AI-assisted crimes targeting federal officials and financial institutions.

Congress is taking decisive action against the growing threat of AI-powered fraud with the introduction of the bipartisan AI Fraud Deterrence Act. Representatives Ted Lieu (D-CA) and Neal Dunn (R-FL) unveiled the legislation on Tuesday, proposing severe penalties for criminals who exploit artificial intelligence to commit fraud and impersonate federal officials [1][2].

The proposed legislation would dramatically increase penalties for AI-assisted crimes, with maximum fines reaching $2 million and prison sentences extending up to 30 years for bank fraud committed with AI assistance. The bill specifically targets wire fraud, mail fraud, money laundering, and the impersonation of federal officials using AI technology [1].

The urgency behind this legislation stems from recent high-profile incidents in which scammers successfully used AI to breach the security of America's top officials. In May, fraudsters used AI to impersonate White House Chief of Staff Susie Wiles, making calls to senators, governors, business leaders, and other high-level contacts using her synthesized voice [1][2].

Two months later, criminals mimicked Secretary of State Marco Rubio's voice in calls to three foreign ministers, a member of Congress, and a governor in an apparent attempt to obtain sensitive information and account access. These incidents highlight the national security implications of AI-powered impersonation attacks [1].

The AI Fraud Deterrence Act establishes a tiered penalty system based on the type of crime committed. AI-aided mail and wire fraud would carry sentences of up to 20 years and fines of $1 million, with standard penalties rising to $2 million for more severe cases. The most serious offense, AI-driven bank fraud, could result in 30 years in prison and a $2 million penalty [1].

AI-assisted money laundering would carry sentences of up to 20 years and fines of $1 million or three times the transaction value, whichever is greater. The specific crime of AI impersonation of federal officials would bring three years in prison and a $1 million penalty [1].
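The penalty schedule above reduces to simple arithmetic, including a greater-of rule for money laundering. As a hedged sketch of that schedule (the function name, offense labels, and structure are illustrative, not drawn from the bill text; dollar figures are as reported):

```python
# Illustrative model of the reported fine caps in the AI Fraud Deterrence Act.
# Figures reflect the news summary above, not the statutory language itself.

def max_fine_usd(offense: str, transaction_value: float = 0.0) -> int:
    """Return the reported maximum fine (in USD) for an AI-assisted offense."""
    flat_caps = {
        "mail_fraud": 1_000_000,
        "wire_fraud": 1_000_000,
        "bank_fraud": 2_000_000,
        "official_impersonation": 1_000_000,
    }
    if offense == "money_laundering":
        # Greater of $1 million or three times the laundered transaction value.
        return max(1_000_000, int(3 * transaction_value))
    return flat_caps[offense]

print(max_fine_usd("bank_fraud"))                 # 2000000
print(max_fine_usd("money_laundering", 500_000))  # 1500000
```

For a $500,000 laundering transaction, three times the value ($1.5 million) exceeds the $1 million floor, so the larger figure applies.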
Experts warn that AI has fundamentally changed the fraud landscape by lowering barriers to entry and improving the quality of fraudulent content. The FBI issued a warning in December stating that "generative AI reduces the time and effort criminals must expend to deceive their targets" and "can correct for human errors that might otherwise serve as warning signs for fraud" [2].

Expense management companies have reported a dramatic increase in AI-generated fraudulent documents. AppZen revealed that roughly 14% of all fraudulent documents submitted in September were generated by AI, up from zero AI-fueled incidents a year before [2].

While the legislation proposes strong penalties, experts acknowledge significant enforcement challenges. Mohith Agadi, co-founder of Provenance AI, noted that "the real challenge is proving in court that AI was used" because "synthetic content can be difficult to attribute, and existing forensic tools are inconsistent" [1].

The bill includes important First Amendment protections, carving out exemptions for satire, parody, and other expressive uses that include clear disclosure of inauthenticity. This provision ensures that legitimate creative and commentary uses of AI-generated content remain protected while targeting malicious applications [1][2].
Summarized by Navi