Curated by THEOUTPOST
On Tue, 13 Aug, 12:01 AM UTC
2 Sources
[1]
FCC Wants Companies to Disclose the Use of AI in Calls, Texts
The FCC is amping up its fight against AI-generated phone calls with new rules that would require companies to disclose to customers any use of AI in calls or texts. "That means before any one of us gives our consent for calls from companies and campaigns, they need to tell us if they are using this technology," says FCC Chair Jessica Rosenworcel. "It also means that callers using AI-generated voices need to disclose that at the start of a call."

The move comes after the FCC fined Democratic consultant Steve Kramer $6 million for allegedly orchestrating an AI deepfake of President Biden's voice ahead of the New Hampshire presidential primary. The agency then declared that robocalls that use AI-generated voices are illegal, citing a 1991 law designed to protect consumers from pre-recorded automated calls. In late June, Rosenworcel sent letters to the CEOs of nine telecom providers, asking about the measures they were taking to avoid fraudulent robocalls that spread misinformation. The proposed new rules build on that by requiring AI disclosures. "This provides consumers with an opportunity to identify and avoid those calls or texts that contain an enhanced risk of fraud and other scams," the FCC says.

For the purpose of the new ruling, the FCC suggests defining an AI-generated call as "a call that uses any technology or tool to generate an artificial or prerecorded voice or a text using computational technology or other machine learning, including predictive algorithms and large language models, to process natural language and produce voice or text content to communicate with a called party over an outbound telephone call." The FCC notes that it wants to ensure any rules protect "positive uses of AI to help people with disabilities utilize the telephone networks." It now seeks public comment on the rules ahead of a final vote.
Rosenworcel's proposal has the support of her colleagues, though GOP Commissioner Brendan Carr warned, "there is a risk of overdoing it [with AI regulation] early on." Commissioner Nathan Simington, also a Republican, stressed that any rules should not allow for actively monitoring phone calls. "The idea that the commission would put its imprimatur on even the suggestion of ubiquitous third-party monitoring of telephone calls for the putative purpose of 'safety' is beyond the pale," he said. The proposal highlights several scam call detection technologies in development. Google's solution taps Gemini Nano and would run locally on a smartphone without connecting to the internet. Microsoft offers Azure Operator Call Protection for telecom operators.
[2]
FCC proposes new AI rules for robocallers to ignore
If you're tired of blocking AI-generated calls and messages on your Android phone or iPhone, you'll be pleased to know that the Federal Communications Commission (FCC) is intensifying its fight against robocallers who use AI in their communications. As part of this initiative, the regulator is proposing new rules that would require callers to explicitly disclose any use of AI in their calls and text messages. These disclosures would also apply to AI-generated calls, which the FCC notes "contain an enhanced risk of fraud and other scams." That said, the FCC is considering exceptions for individuals with speech or hearing disabilities who rely on AI-generated voice software.
The Federal Communications Commission (FCC) is considering new regulations that would require companies to disclose the use of artificial intelligence in robocalls and texts. This move aims to combat the rising threat of AI-generated scams and misinformation.
The Federal Communications Commission (FCC) is taking proactive steps to address the growing concern of artificial intelligence (AI) being used in robocalls and text messages. In a recent announcement, the regulatory body proposed new rules that would require companies to disclose when they use AI-generated voices or texts in their communications with consumers [1].
Under the new proposal, businesses would be obligated to inform recipients if the voice or text they receive is AI-generated. This move comes as part of the FCC's ongoing efforts to combat the rising tide of scams and misinformation facilitated by advanced AI technologies [2].
The proposed rules would fall under the existing Telephone Consumer Protection Act (TCPA), which already regulates telemarketing calls and text messages. By extending these regulations to cover AI-generated content, the FCC aims to provide consumers with greater transparency and control over the communications they receive.
FCC Chairwoman Jessica Rosenworcel emphasized the potential dangers of AI-powered robocalls, stating that they pose a unique threat due to their ability to mimic the voices of familiar individuals, including friends and family members [1]. This capability has raised concerns about the potential for sophisticated scams and deepfake audio messages that could mislead unsuspecting recipients.
If implemented, these new rules would have significant implications for companies that utilize AI in their customer communications. Businesses would need to develop systems to clearly identify and disclose the use of AI-generated content in their robocalls and text messages [2]. This could potentially impact a wide range of industries, from telemarketing and customer service to political campaigns and public announcements.
As part of the regulatory process, the FCC has opened a public comment period to gather feedback on the proposed rules. This allows stakeholders, including businesses, consumer advocacy groups, and individual citizens, to voice their opinions and concerns about the potential regulations [1].
The FCC's proposal is part of a larger trend of increased scrutiny and regulation of AI technologies across various sectors. As AI becomes more sophisticated and widespread, government agencies and lawmakers are grappling with how to ensure its responsible use while protecting consumers from potential harm [2].
While the proposed rules aim to increase transparency, there may be challenges in their implementation. Determining what constitutes AI-generated content and how to effectively disclose this information in a way that is meaningful to consumers are just some of the issues that will need to be addressed as these regulations take shape [1][2].
Reference
[1] FCC Wants Companies to Disclose the Use of AI in Calls, Texts
[2] FCC proposes new AI rules for robocallers to ignore
© 2024 TheOutpost.AI All rights reserved