2 Sources
[1]
Berkeley non-profit works to counter election threat posed by A.I.
AI-generated "deepfakes" are already making their way into the 2024 election cycle. Lauren Toms reports. (9-5-24) Website: http://kpix.com
[2]
Berkeley non-profit works to counter AI election disinformation
Deepfakes - falsified images and videos made using AI - are already making their way into the 2024 election cycle, with some using the technology to try to defame candidates or sway voters. A Berkeley-based non-profit is hoping to educate the public about the technology's potential for harm.

"When you see something that makes your enemy look bad, even if you're not sure if it's true, it can enrage you pretty seriously," Lucas Hansen, the co-founder of CivAI, told CBS News Bay Area.

"There's something particular about images and video that just short circuits to the emotions of your brain," he explained. "People have been writing inflammatory text on the internet for as long as the internet has existed, but it just didn't have the same effect. When you see an image or video, it feels real, even if you kind of know that it's not."

Just in the last month, deepfakes have circulated on social media showing Swifties supporting former president Donald Trump, along with a video falsely claiming Vice President Kamala Harris was involved in a car accident. Both have been debunked by experts.

Through CivAI's demonstrations, which show users just how easy it is to mistake a deepfake for a real image, Hansen hopes to teach people how not to be fooled.

"Part of what we want to do is education, not just to prepare people for the things that are happening, but so that society can collectively make this decision instead of it being something that happens in an office somewhere in Palo Alto," he said.

Regulation is on the rise, but experts say it may come too late to make an impact on this election cycle. In August, California lawmakers approved legislation that will require online platforms to remove deceptive content, such as deepfakes and AI-generated images, in the lead-up to an election. But it won't take effect until 2025. AI expert Ahmed Banafa of San Jose State University says AI development is moving too fast for regulation to catch up.
"I'm really concerned about it, because this level of AI can do magic. It can do pictures of situations and events, it can be done at the right moment, and at the least it will create some kind of confusion," he told CBS News Bay Area.

Assemblymember Marc Berman, who represents part of Silicon Valley and co-sponsored the legislation, AB 2655, said the bill "will ensure that online platforms restrict the spread of election-related deceptive deepfakes meant to deceive or disenfranchise voters based on fraudulent content."

"Advances in AI over the last few years make it too easy for practically anyone to generate this hyper-realistic content. AB 2655's passage through the Legislature is a win for California's voters, and for our democracy," he added.

Hansen hopes to get ahead of the curve by showing people how convincing fake content is already affecting them.

"When this content is going to be used to try to trick or misinform people, then it's important that people know that they ought to be skeptical when they're looking at an image, just in order to operate in this world," he said. "It's now something like: you can't believe everything that you read on the internet."
A Berkeley-based non-profit organization is taking proactive steps to counter the threat of AI-generated disinformation in upcoming elections. The group is developing tools and demonstrations to expose how easily fake content can mislead voters.
As artificial intelligence technology continues to advance, a new challenge has emerged in the realm of election integrity. AI-generated disinformation poses a significant threat to the democratic process, with the potential to mislead voters and manipulate public opinion. In response to this growing concern, a Berkeley-based non-profit organization is taking proactive measures to combat this issue [1].
Artificial intelligence has made it increasingly easy to create and disseminate false or misleading information. From deepfake videos to fabricated news articles, AI-powered tools can produce convincing content that is difficult for the average person to distinguish from genuine sources. This capability has raised alarms among election officials and cybersecurity experts, who fear that such technology could be used to sway voter opinions or undermine faith in the electoral process [2].
The Berkeley-based non-profit, CivAI, is at the forefront of efforts to counter AI-generated election disinformation. Its approach centers on education: building demonstrations that show users firsthand how easily a deepfake can be mistaken for a real image. By leveraging AI capabilities itself, the organization aims to stay a step ahead of those who would use the technology for malicious purposes [1].
Recognizing the importance of a coordinated response, the non-profit is working closely with election officials to implement their anti-disinformation measures. This collaboration ensures that the tools and strategies developed are tailored to the specific needs of election administrators and can be effectively integrated into existing security protocols [2].
The work being done by this Berkeley non-profit has implications that extend far beyond a single election cycle. As AI technology continues to evolve, the threat of sophisticated disinformation campaigns is likely to grow. By developing effective countermeasures now, the organization is helping to safeguard the integrity of future elections and protect the foundations of democratic discourse [1][2].
Despite these promising efforts, combating AI-generated disinformation remains a complex challenge. The rapid pace of technological advancement means that those seeking to spread false information are constantly developing new techniques. As such, the fight against election disinformation will require ongoing vigilance, innovation, and collaboration between tech experts, policymakers, and election officials [2].