Curated by THEOUTPOST
On Wed, 16 Oct, 12:02 AM UTC
3 Sources
[1]
AI bots easily bypass some social media safeguards, study reveals
While artificial intelligence (AI) bots can serve a legitimate purpose on social media -- such as marketing or customer service -- some are designed to manipulate public discussion, incite hate speech, spread misinformation or enact fraud and scams. To combat potentially harmful bot activity, some platforms have published policies on using bots and created technical mechanisms to enforce those policies. But are those policies and mechanisms enough to keep social media users safe?
Research from the University of Notre Dame analyzed the AI bot policies and mechanisms of eight social media platforms: LinkedIn, Mastodon, Reddit, TikTok, X (formerly known as Twitter) and Meta platforms Facebook, Instagram and Threads. Then researchers attempted to launch bots to test bot policy enforcement processes. Their research is published on the arXiv preprint server.
The researchers successfully published a benign "test" post from a bot on every platform.
"As computer scientists, we know how these bots are created, how they get plugged in and how malicious they can be, but we hoped the social media platforms would block or shut the bots down and it wouldn't really be a problem," said Paul Brenner, a faculty member and director in the Center for Research Computing at Notre Dame and senior author of the study. "So we took a look at what the platforms, often vaguely, state they do and then tested to see if they actually enforce their policies."
The researchers found that the Meta platforms were the most difficult to launch bots on -- it took multiple attempts to bypass their policy enforcement mechanisms. Although the researchers racked up three suspensions in the process, they were successful in launching a bot and posting a "test" post on their fourth attempt. The only other platform that presented a modest challenge was TikTok, due to the platform's frequent use of CAPTCHAs. But three platforms provided no challenge at all.
"Reddit, Mastodon and X were trivial," Brenner said. "Despite what their policy says or the technical bot mechanisms they have, it was very easy to get a bot up and working on X. They aren't effectively enforcing their policies."
As of the study's publishing date, all test bot accounts and posts were still live. Brenner shared that interns, who had only a high school-level education and minimal training, were able to launch the test bots using technology that is readily available to the public, highlighting how easy it is to launch bots online.
Overall, the researchers concluded that none of the eight social media platforms tested are providing sufficient protection and monitoring to keep users safe from malicious bot activity. Brenner argued that laws, economic incentive structures, user education and technological advances are needed to protect the public from malicious bots.
"There needs to be U.S. legislation requiring platforms to identify human versus bot accounts because we know people can't differentiate the two by themselves," Brenner said. "The economics right now are skewed against this as the number of accounts on each platform are a basis of marketing revenue. This needs to be in front of policymakers."
To create their bots, researchers used Selenium, which is a suite of tools for automating web browsers, and OpenAI's GPT-4o and DALL-E 3. The research was led by Kristina Radivojevic, a doctoral student at Notre Dame.
[2]
Social media policies are no match for AI bots
Social media platforms aren't doing enough to stop harmful AI bots, research finds.
While artificial intelligence (AI) bots can serve a legitimate purpose on social media -- such as marketing or customer service -- some are designed to manipulate public discussion, incite hate speech, spread misinformation, or enact fraud and scams. To combat potentially harmful bot activity, some platforms have published policies on using bots and created technical mechanisms to enforce those policies. But are those policies and mechanisms enough to keep social media users safe?
The new research analyzed the AI bot policies and mechanisms of eight social media platforms: LinkedIn, Mastodon, Reddit, TikTok, X (formerly known as Twitter), and Meta platforms Facebook, Instagram, and Threads. Then researchers attempted to launch bots to test bot policy enforcement processes.
The researchers successfully published a benign "test" post from a bot on every platform.
"As computer scientists, we know how these bots are created, how they get plugged in, and how malicious they can be, but we hoped the social media platforms would block or shut the bots down and it wouldn't really be a problem," says Paul Brenner, a faculty member and director in the Center for Research Computing at the University of Notre Dame and senior author of the study. "So we took a look at what the platforms, often vaguely, state they do and then tested to see if they actually enforce their policies."
The researchers found that the Meta platforms were the most difficult to launch bots on -- it took multiple attempts to bypass their policy enforcement mechanisms. Although the researchers racked up three suspensions in the process, they were successful in launching a bot and posting a "test" post on their fourth attempt. The only other platform that presented a modest challenge was TikTok, due to the platform's frequent use of CAPTCHAs. But three platforms provided no challenge at all.
"Reddit, Mastodon, and X were trivial," Brenner says. "Despite what their policy says or the technical bot mechanisms they have, it was very easy to get a bot up and working on X. They aren't effectively enforcing their policies."
As of the study's publishing date, all test bot accounts and posts were still live. Brenner shared that interns, who had only a high school-level education and minimal training, were able to launch the test bots using technology that is readily available to the public, highlighting how easy it is to launch bots online.
Overall, the researchers concluded that none of the eight social media platforms tested are providing sufficient protection and monitoring to keep users safe from malicious bot activity. Brenner argued that laws, economic incentive structures, user education, and technological advances are needed to protect the public from malicious bots.
"There needs to be US legislation requiring platforms to identify human versus bot accounts because we know people can't differentiate the two by themselves," Brenner says. "The economics right now are skewed against this as the number of accounts on each platform are a basis of marketing revenue. This needs to be in front of policymakers."
To create their bots, researchers used Selenium, which is a suite of tools for automating web browsers, and OpenAI's GPT-4o and DALL-E 3. The research appears as a preprint on arXiv. The preprint has not undergone peer review and its findings are preliminary.
[3]
Social media platforms aren't doing enough to stop harmful AI bots | Newswise
Paul Brenner, Director in the Center for Research Computing at the University of Notre Dame
While artificial intelligence (AI) bots can serve a legitimate purpose on social media -- such as marketing or customer service -- some are designed to manipulate public discussion, incite hate speech, spread misinformation or enact fraud and scams. To combat potentially harmful bot activity, some platforms have published policies on using bots and created technical mechanisms to enforce those policies. But are those policies and mechanisms enough to keep social media users safe?
New research from the University of Notre Dame analyzed the AI bot policies and mechanisms of eight social media platforms: LinkedIn, Mastodon, Reddit, TikTok, X (formerly known as Twitter) and Meta platforms Facebook, Instagram and Threads. Then researchers attempted to launch bots to test bot policy enforcement processes.
The researchers successfully published a benign "test" post from a bot on every platform.
"As computer scientists, we know how these bots are created, how they get plugged in and how malicious they can be, but we hoped the social media platforms would block or shut the bots down and it wouldn't really be a problem," said Paul Brenner, a faculty member and director in the Center for Research Computing at Notre Dame and senior author of the study. "So we took a look at what the platforms, often vaguely, state they do and then tested to see if they actually enforce their policies."
The researchers found that the Meta platforms were the most difficult to launch bots on -- it took multiple attempts to bypass their policy enforcement mechanisms. Although the researchers racked up three suspensions in the process, they were successful in launching a bot and posting a "test" post on their fourth attempt. The only other platform that presented a modest challenge was TikTok, due to the platform's frequent use of CAPTCHAs. But three platforms provided no challenge at all.
"Reddit, Mastodon and X were trivial," Brenner said. "Despite what their policy says or the technical bot mechanisms they have, it was very easy to get a bot up and working on X. They aren't effectively enforcing their policies."
As of the study's publishing date, all test bot accounts and posts were still live. Brenner shared that interns, who had only a high school-level education and minimal training, were able to launch the test bots using technology that is readily available to the public, highlighting how easy it is to launch bots online.
Overall, the researchers concluded that none of the eight social media platforms tested are providing sufficient protection and monitoring to keep users safe from malicious bot activity. Brenner argued that laws, economic incentive structures, user education and technological advances are needed to protect the public from malicious bots.
"There needs to be U.S. legislation requiring platforms to identify human versus bot accounts because we know people can't differentiate the two by themselves," Brenner said. "The economics right now are skewed against this as the number of accounts on each platform are a basis of marketing revenue. This needs to be in front of policymakers."
To create their bots, researchers used Selenium, which is a suite of tools for automating web browsers, and OpenAI's GPT-4o and DALL-E 3.
The research, published as a preprint on arXiv, was led by Kristina Radivojevic, a doctoral student at Notre Dame, and supported by CRC student interns Catrell Conley, Cormac Kennedy and Christopher McAleer.
A University of Notre Dame study exposes the vulnerability of major social media platforms to AI bot infiltration, raising concerns about user safety and the need for stronger regulations.
A recent study conducted by researchers at the University of Notre Dame has revealed alarming vulnerabilities in the defenses of major social media platforms against artificial intelligence (AI) bots. The research, led by doctoral student Kristina Radivojevic and published as a preprint on arXiv, analyzed the AI bot policies and enforcement mechanisms of eight popular social media platforms: LinkedIn, Mastodon, Reddit, TikTok, X (formerly Twitter), and Meta's Facebook, Instagram, and Threads [1].
The researchers attempted to launch benign "test" bots on each platform to evaluate the effectiveness of their bot policy enforcement processes. Surprisingly, they successfully published test posts from bots on all eight platforms, exposing significant gaps in their security measures [2].
Among the platforms tested, Meta's services (Facebook, Instagram, and Threads) proved to be the most challenging for bot deployment. It took researchers multiple attempts and three suspensions before successfully launching a bot on their fourth try. TikTok presented moderate difficulty due to its frequent use of CAPTCHAs [3].
The study revealed that platforms like Reddit, Mastodon, and X (formerly Twitter) were "trivial" to infiltrate with bots. Paul Brenner, a faculty member and director at Notre Dame's Center for Research Computing, expressed concern about the ease with which bots could be created and deployed on these platforms, despite their stated policies [1].
To create their test bots, the researchers employed readily available technologies: Selenium, a suite of tools for automating web browsers, together with OpenAI's GPT-4o and DALL-E 3 for generating text and images.
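As an illustration of how little specialized tooling this requires, here is a minimal Python sketch of the general approach such tools enable, assuming GPT-4o drafts the post text and Selenium drives a browser session to submit it. This is not the study's code; the target URL, page element names, and prompt are hypothetical placeholders.
# Minimal illustrative sketch, not the study's actual code. The target URL and
# page element selectors below are hypothetical placeholders.
from openai import OpenAI                        # pip install openai
from selenium import webdriver                   # pip install selenium
from selenium.webdriver.common.by import By

# 1. Generate a short, benign post with GPT-4o.
client = OpenAI()  # expects OPENAI_API_KEY in the environment
completion = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user",
               "content": "Write one friendly sentence for a harmless test post."}],
)
post_text = completion.choices[0].message.content

# 2. Drive a real browser with Selenium and submit the generated text.
driver = webdriver.Chrome()
driver.get("https://social-platform.example/compose")            # placeholder URL
driver.find_element(By.NAME, "post_body").send_keys(post_text)   # placeholder field name
driver.find_element(By.ID, "submit").click()                     # placeholder button id
driver.quit()
Both libraries are freely available, which is consistent with the researchers' observation that interns with minimal training could launch working bots using publicly available technology.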
The study highlights several critical issues: platform bot policies are often vague and not effectively enforced; interns with only a high school-level education and minimal training were able to launch the bots using publicly available tools; and all test bot accounts and posts remained live as of the study's publication.
Brenner argues that a multi-faceted approach is necessary to address this issue: US legislation requiring platforms to identify human versus bot accounts, economic incentive structures that do not reward inflated account numbers, user education, and continued technological advances.
This research underscores the urgent need for improved security measures and regulatory oversight in the rapidly evolving landscape of social media and AI technology.