Curated by THEOUTPOST
On Tue, 16 Jul, 8:01 AM UTC
2 Sources
[1]
ECI Should Work with Fact-Checkers To Tackle Misinformation
"The Election Commission should have brought all political parties on a common platform and made them agree to a code of conduct on how they are going to use AI-generated content in their campaigning. To say that political parties should not use AI-generated content at all is to hold on to a very idealistic position, maybe that may not work. But there has to be a code of conduct...and I'm sure the Election Commission does have the wherewithal to do it," Jency Jacob, Managing Editor at BOOM Fact Check, observed while speaking at MediaNama's 'Fact-Checking and Combating Misinformation in Elections' virtual event on July 3.

The discussion focused on fact-checkers' observations about the nature of misinformation on social media during the Lok Sabha elections. Along with Jacob, Abhilash Mallick, Editor (Fact-Check), Quint; Kritika Goel, Head of Editorial Operations (India), Logically Facts; Pratik Sinha, Co-founder and Editor, Alt News; Rajneil Kamath, Founder and Publisher, Newschecker; Shivam Shankar Singh, Data Analyst and Campaign Consultant; and Tarunima Prabhakar, Co-founder of Tattle Civic Technologies, discussed the misinformation trends they observed and the effectiveness of the measures undertaken by platforms and the Election Commission of India (ECI) to combat misinformation during the elections.

In the fact-checkers' assessment, the ECI's actions against misinformation were quite "underwhelming". They believe the ECI could have done better at enforcing existing guidelines, establishing new rules where needed, and undertaking joint efforts to curb the misinformation and disinformation that plagued the elections. The discussants noted that deepfakes did not have a significant impact on the information ecosystem during the Lok Sabha elections, but political parties have begun experimenting with AI-generated content for political advertising, digital clones, memes, satire, and more.
According to Rajneil Kamath, it is during the upcoming state elections that one can expect greater use of deepfakes. Given these potential risks, the speakers highlighted how the ECI failed to address AI-related risks adequately. Jacob argued that the ECI should have established a code of conduct signed by all political parties: "...they [ECI] can use the full force of the powers they have to force the election parties to come to a conclusion that what are they going to, what are the boundaries; let there be rules, let there be ground rules, what will they use? What will they not use? What are the no-go areas, at least start a discussion. My point is that maybe political parties may not readily agree to do this, but we didn't even see a discussion. I'm surprised that we went into an entire general election without the Election Commission even talking about it," he observed.

In May, the ECI released guidelines for "responsible and ethical use of social media platforms" during the election period. Notably, the guidelines prohibited political parties from using AI-based tools to distort information or spread false content that may affect free and fair elections. Kritika Goel found the AI guidelines vague and delayed. She explained, "They warned people against using AI, or sharing anything, which was likely to manipulate information. It lacked clarity on whether political parties could rely on artificial intelligence for campaigning. We all anticipated AI to be a very big challenge, we all anticipated how parties are going to rely on technology for not just campaigning, but also for luring the voters or passing on some information to the electorate, etc. So, I think these guidelines came in a little too late. And they were also slightly vague in nature."

In April, the ECI launched the Myth vs Reality register to debunk fake news related to elections and direct readers to reliable sources for verifying the authenticity of the information in question.
Additionally, the ECI had developed the cVigil mobile application to allow citizens to report violations of the Model Code of Conduct. Speaking of these initiatives, Kamath and Jacob said that while the ECI attempted multiple things, it is unclear how effective these measures were. For instance, a report by BoomLive revealed that the cVigil app does not permit users to upload pre-recorded videos, photos, or links to such content, thereby preventing users from reporting violative content commonly found on social media.

The ECI issued nine content takedown orders to social media platforms from March 2024 to May 15, 2024, according to the data it provided in response to MediaNama's RTI filed in April. The RTI sought copies of all the content takedown directions issued by the ECI to X, YouTube, and other social media platforms from March till the latest data available. When asked whether these numbers made sense, the speakers emphasised that the ECI should have been transparent about the complaints it received, how many it acted upon, and the process followed to take down online content.

Mallick noted that there was some communication with the ECI when the election dates were announced, but the fact-checkers failed to get a response from them later on. "...as things moved on, we reached out to the ECI multiple times for queries, or when we were trying to do some back checks, and we never heard back from them. While there was some amount of communication happening, there wasn't any sort of communication happening with fact-checkers, with probably news publishers as well, on things that needed to be fact-checked, or things that needed to be put out. And even when there were certain tweets or posts that the ECI in different states would put out, they would just say that this was fake, or this is not true, and not provide additional context to it. This did not help much in dispelling those rumours or fact-checking them," Mallick added.

The experts reiterated that fact-checkers are available, with existing systems, and that there are platforms willing to work with the ECI; these resources need to be brought together to combat misinformation collaboratively. "They should have worked very closely with fact-checkers. And they don't need to work with individual fact-checkers, I understand that they would have issues with that. But they can work with larger journalistic organizations, not just fact-checkers; there are television organizations, there are print organizations. I don't know whether they have done any campaign along with them, maybe they have, but unfortunately, the ECI always seems to be in firefighting mode, they don't need to be," Jacob stated.
[2]
AI's limited but noteworthy impact on 2024 Elections: Fact-checkers
During the 2024 Lok Sabha elections, a video of Bharatiya Janata Party (BJP) Member of Parliament (MP) Dinesh Lal Yadav claiming that unemployment in India is rising because of population growth was shared by Indian Youth Congress President Srinivas BV on X (formerly Twitter). As the video spread on social media, BJP IT cell head Amit Malviya claimed that it was a deepfake being shared to "mislead people, create unrest and sow divisions in the society." However, a fact-check by Logically Facts revealed that the video was, in fact, real and not a deepfake.

This was one among many instances where AI and deepfakes were invoked during the 2024 Parliamentary elections. Kritika Goel, Head of Editorial Operations (India), Logically Facts, noted that widespread awareness of AI and deepfakes has made the new technology an effective tool for political parties to deny the truth. People can now deny the things they said and dismiss the evidence as a deepfake, giving them "plausible deniability" or the "liar's dividend."

Several organisations, like the World Economic Forum, identified AI-generated misinformation as one of the biggest short-term risks this year; however, panellists speaking at MediaNama's 'Fact-checking and Combating Misinformation in Elections' discussion on 3 July 2024 suggested otherwise. They noted that AI did not pose a major threat to democracy; instead, parties seemed more interested in testing the ways in which AI can be used to spread misinformation. Rajneil Kamath, Founder and Publisher of Newschecker, said, "We didn't see any democracy destabilizing deepfake the way one may have seen in Slovakia, for example, which happened just two days before an election."
Instead, we saw "manipulated media using AI that was very viral, that was widely spoken about, especially involving celebrities, for example, and mixing celebrity culture with politics or memes and satire". Goel noted that AI was used more for political campaigning during these elections than for information manipulation. Abhilash Mallick, Editor at Quint Fact Check, concurred, adding that there were instances where Quint had to inform the public that deepfakes were being used for political advertising.

Even without destabilizing democracy, deepfakes and AI-generated misinformation have introduced new problems to the information ecosystem. Firstly, Goel noted that AI has led to an "erosion of trust", wherein people are less likely to trust verified content, suspecting it to be AI-generated. Goel said, "It has led to planting that seed of doubt in your audiences or your reader's mind, which makes them question even the legitimate information that they're engaging with." Kamath concurred that with AI, "we don't just have to tell people what may not be correct. We also have to start telling them what is indeed true. So, it's never a false alarm because many things that are true also now have a feature where nobody wants to believe that it is true and therefore think it's misinformation."

While AI-generated misinformation may not have been particularly "democracy destabilizing this election", Shivam Shankar Singh, Data Analyst and Campaign Consultant, said, "A lot of the AI and deepfakes will get used for testing out hypotheses because the current narratives have reached a certain amount of saturation point. And that is where the technology and potential for misinformation scares me because it will not be a part of a campaign anymore, it will be a testing route." He predicted that political parties will use deepfakes and AI-generated content to push different narratives, with the uptake of that misinformation used to gauge what resonates with the public.
Jency Jacob, Managing Editor at BOOM Fact Check, agreed that, by his observation, political parties are testing deepfake technology. In many instances, AI and deepfakes were used as memes or satire to "make fun of the person whom they are targeting." He said, "I feel that while they knew that this is not going to work and people will be able to see it through because some of these were really poor quality, they were testing it, and they are testing it for the future elections. They're trying to see how this will work, whether the people are receptive to it, whether they will accept it, whether they understand." Jacob observed, "A lot of the videos, I know that people understood, are actually not true videos, but they enjoyed it anyway, and they liked it because it subscribed to their point of view or the political ideology they follow." Thus, he warned, "We can't take our eyes off because this is a new tool or it's a new technology that everyone is very excited about. As the tools get better, more and more people... the challenge for all the fact-checkers is going to come."

Tarunima Prabhakar, Co-founder of Tattle Civic Technologies, said that as AI technology becomes more common, a conversation is needed to determine what counts as AI-generated misinformation. She noted that platforms now integrate AI tools that allow people to manipulate their images, asking, "To what extent is this important, specifically in the context of misinformation when it comes to campaigning?"

Prabhakar also noted that a challenge fact-checkers face with AI-generated content lies in the nature of AI-detection tools. She said, "No tool gives you a yes or no binary answer, right? They always give you probabilistic answers. We're also entering a world in which, because different companies are vying for deepfake detection as a business model, these detection models are often developed as proprietary technologies. And that makes it actually harder to even understand what these probabilistic scores mean." Thus, "To actually trust the detection side of the game, we probably need more transparent approaches. We need the academic and research community to step in and do some of this work more transparently. Over the last 12 to 18 months, we have seen less and less research being done in the open on this; a lot of it is now actually being done inside companies," she said.

Goel also pointed out that deepfake detection tools are often not trained on regional languages and said that more classifiers were needed; Prabhakar suggested "contextual data sets" as a solution. When asked if AI can be deployed to fact-check misinformation, Goel said, "I do think that you can leverage technology, you can rely on technology to make things better to probably for tasks like claim discovery, for other tasks like which other sources you could rely on like a repository or something." However, she said, "There's a lot of local context, there's cultural context that needs to be kept in mind while we're writing our fact check", which may pose a limitation for AI.
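Prabhakar's point about probabilistic rather than binary detector outputs can be illustrated with a small sketch. The scores, threshold, and labels below are entirely hypothetical and do not come from any real detection API; the sketch simply shows why a raw probability from an opaque model is hard to turn into a verdict without knowing how the model was calibrated.

```python
# Illustrative sketch: deepfake detectors return probabilistic scores, not
# yes/no answers. All numbers and labels here are hypothetical examples.

def interpret_score(score, decision_threshold=0.5, uncertainty_band=0.15):
    """Map a detector's probability-of-fake score to a verdict.

    A score close to the threshold is reported as inconclusive rather than
    forced into a binary answer, mirroring how fact-checkers must treat
    probabilistic outputs from detection models.
    """
    if abs(score - decision_threshold) <= uncertainty_band:
        return "inconclusive"
    return "likely fake" if score > decision_threshold else "likely authentic"

# The same clip scored by three hypothetical detectors:
scores = {"detector_a": 0.91, "detector_b": 0.55, "detector_c": 0.22}
verdicts = {name: interpret_score(s) for name, s in scores.items()}
# Without knowing how each proprietary model was calibrated, a 0.55 from one
# detector and a 0.91 from another cannot be compared directly, which is why
# more transparent, well-documented detection approaches are called for.
```

The "uncertainty band" is one possible way to surface ambiguity; where its boundaries should sit depends on calibration data that, as Prabhakar notes, proprietary models rarely publish.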
After the 2024 Lok Sabha elections, the Election Commission of India faces calls for greater transparency and closer collaboration with fact-checkers to combat misinformation. While AI had a limited but notable impact on these elections, concerns about its potential misuse in upcoming polls persist.
In the wake of the 2024 Lok Sabha elections, the Election Commission of India (ECI) is under increasing pressure to address the growing threat of misinformation. Experts are calling for the ECI to work more closely with fact-checkers and ensure greater transparency in its efforts to tackle false information [1]. This collaboration is seen as crucial to maintaining the integrity of the electoral process and ensuring that voters have access to accurate information.

While artificial intelligence (AI) had a limited but noteworthy impact on the 2024 elections, concerns about its potential misuse remain significant [2]. Fact-checkers and election observers are particularly worried about the use of AI to create and spread misinformation, which could influence voter behavior and undermine the democratic process.

Experts are emphasizing the need for the ECI to be more transparent in its handling of misinformation. This includes sharing data on the number and types of complaints received, actions taken, and the overall impact of misinformation on the electoral process [1]. By providing this information, the ECI can build trust with the public and demonstrate its commitment to fair elections.

Fact-checkers are expected to play a crucial role in upcoming elections. Their expertise in identifying and debunking false information is seen as invaluable in the fight against misinformation. The ECI is being urged to establish formal partnerships with reputable fact-checking organizations to leverage their skills and resources effectively [1].

As AI technology advances, the challenge of identifying and countering AI-generated misinformation becomes more complex. Fact-checkers and election officials will need to adapt their strategies to deal with increasingly sophisticated forms of false information [2]. This may include developing new tools and techniques to detect AI-generated content and educating the public about the potential risks.

The 2024 elections were seen as a critical test of India's ability to manage the impact of technology on its democratic processes. The lessons learned and strategies developed during this election cycle will likely shape the approach to combating misinformation in future elections. As such, the ECI's actions and policies in the coming months will be closely watched by both domestic and international observers.