Here are expert tips on how to spot and protect yourself against them.
Last month, an old friend forwarded me a video that made my stomach drop. In it, what appeared to be violent protesters streamed down the streets of a major city, holding signs accusing government and business officials of "censoring our voice online!"
The footage looked authentic. The audio was clear. The protest signs appeared realistically amateurish.
But it was completely fabricated.
That didn't make the video any less effective, though. If anything, its believability made it more dangerous. That single video had the power to shape opinions, inflame tensions, and spread across platforms before the truth caught up. This is the hallmark of a narrative attack: not just a falsehood, but a story carefully crafted to manipulate perception on a large scale.
Narrative attacks, as research firm Forrester defines them, are the new frontier of cybersecurity: AI-powered manipulations or distortions of information that exploit biases and emotions, like disinformation campaigns on steroids.
I use the term "narrative attacks" deliberately. Terms like "disinformation" feel abstract and academic, while "narrative attack" is specific and actionable. Like cyberattacks, narrative attacks demonstrate how bad actors exploit technology to inflict operational, reputational, and financial harm.
Think of it this way: A cyberattack exploits vulnerabilities in your technical infrastructure. A narrative attack exploits vulnerabilities in your information environment. This article gives you practical tools to identify narrative attacks, verify suspicious information, and safeguard yourself and your organization: detection techniques, verification tools, and defensive strategies that work in the real world.
Several factors have created the ideal conditions for narrative attacks to flourish, and together they explain why we're seeing such a surge right now.
Meanwhile, bad actors are testing new playbooks, combining traditional propaganda techniques with cutting-edge technology and cyber tactics to create faster, more targeted, and more effective manipulation campaigns.
"The incentive structures built into social media platforms benefit content that provokes controversy, outrage, and other strong emotions," said Jared Holt, an experienced extremism researcher who recently worked as an analyst for the Institute for Strategic Dialogue. Tech companies, he argued, rewarded engagement with inorganic algorithmic amplification to keep users on their services for longer periods, generating more profits.
"Unfortunately, this also created a ripe environment for bad actors who inflame civil issues and promote social disorder in ways that are detrimental to societal health," he added.
Today's narrative attacks blend familiar propaganda methods with emerging technologies. "Censorship" bait is a particularly insidious tactic. Bad actors deliberately post content designed to trigger moderation actions, then use those actions as "proof" of systematic suppression. This approach radicalizes neutral users who might otherwise dismiss extremist content.
Coordinated bot networks have become increasingly sophisticated at mimicking human behavior. Modern bot armies use varied posting schedules, target real influencers for amplification, post diverse content types, and mimic realistic engagement patterns. They're much harder to detect than the automated accounts of previous years.
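To make those detection signals concrete, here is a minimal sketch of how an analyst might score an account for bot-likeness from just two of the signals above: machine-regular posting intervals and repetitive content. The function, its inputs, and the 50/50 weighting are all illustrative assumptions, not a real platform's detection model.

```python
import statistics

def bot_likeness_score(post_timestamps, unique_texts, total_posts):
    """Heuristic score in [0, 1]: higher means more bot-like.

    Combines two simple signals: highly regular posting intervals
    (low variance) and low content diversity (repeated text).
    The weights and thresholds are illustrative, not tuned values.
    """
    if len(post_timestamps) < 3 or total_posts == 0:
        return 0.0  # not enough data to judge

    # Gaps between consecutive posts, in seconds.
    intervals = [b - a for a, b in zip(post_timestamps, post_timestamps[1:])]
    mean = statistics.mean(intervals)
    # Coefficient of variation: near 0 for machine-regular schedules.
    cv = statistics.pstdev(intervals) / mean if mean else 0.0
    regularity = max(0.0, 1.0 - cv)          # 1.0 = perfectly regular

    repetition = 1.0 - (unique_texts / total_posts)  # 1.0 = one text, reposted

    return round(0.5 * regularity + 0.5 * repetition, 2)

# A human-like account: irregular gaps, every post distinct.
human = bot_likeness_score([0, 55, 300, 310, 900], unique_texts=5, total_posts=5)
# A bot-like account: posts every 60 seconds, two messages recycled 100 times.
bot = bot_likeness_score([0, 60, 120, 180, 240], unique_texts=2, total_posts=100)
```

Real detection systems combine dozens of such features across whole networks of accounts; the point of the sketch is that individually weak signals become telling in combination.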
Deepfake videos and AI-generated images have become remarkably sophisticated. We're seeing fake footage of politicians making inflammatory statements, synthetic images of protests that never happened, and artificial celebrity endorsements. The tools used to create this media are becoming increasingly accessible as the LLMs behind them evolve and become more capable.
Synthetic eyewitness posts combine fake personal accounts with geolocation spoofing. Attackers create seemingly authentic social media profiles, complete with personal histories and local details, and use them to spread false firsthand reports of events. These posts often include manipulated location data to make them appear more credible.
Agenda-driven amplification often involves fringe influencers and extremist groups deliberately promoting misleading content to mainstream audiences. They frequently present themselves as independent voices or citizen journalists while coordinating their messaging and timing to maximize their impact.
The list of conspiracy fodder is endless, and recycled conspiracies often get updated with contemporary targets and references. For example, the centuries-old antisemitic trope of secret cabals controlling world events has been repackaged in recent years to target figures like George Soros, the World Economic Forum, or even tech CEOs under the guise of "globalist elites." Another example is modern influencers transforming climate change denial narratives into "smart city" panic campaigns. Vaccine-related conspiracies adapt to target whatever technology or policy is currently controversial. The underlying frameworks remain consistent, but the surface details are updated to reflect current events.
During recent Los Angeles protests, conspiracy videos circulated claiming that foreign governments orchestrated the demonstrations. An investigation revealed that many of these videos originated from known narrative manipulation networks with ties to overseas influence operations. Ahead of last year's Paris Olympics, we saw narratives emerge about "bio-engineered athletes," potential "false flag" terrorist attacks, and other manipulations. These stories lack credible sources but spread rapidly through sports and conspiracy communities.
Fake local news sites have resurfaced across swing states, publishing content designed to look like legitimate journalism while promoting partisan talking points. These sites often use domain names similar to real, local newspapers to increase their credibility.
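The lookalike-domain trick described above is one of the easier tactics to check for programmatically. Here is a hedged sketch using Python's standard-library `difflib` to flag domains suspiciously close to known outlets; the outlet names and the 0.85 similarity threshold are hypothetical placeholders, not a real registry or an industry standard.

```python
from difflib import SequenceMatcher

# Hypothetical list of legitimate local-news domains; in practice this
# would come from a maintained registry, not a hard-coded list.
KNOWN_OUTLETS = ["springfieldgazette.com", "lakecountyherald.com"]

def lookalike_candidates(domain, known=KNOWN_OUTLETS, threshold=0.85):
    """Flag domains that closely resemble, but don't match, a known outlet.

    SequenceMatcher.ratio() returns similarity in [0, 1]; the 0.85
    threshold is an illustrative assumption.
    """
    hits = []
    for real in known:
        if domain == real:
            continue  # exact match is the legitimate site itself
        score = SequenceMatcher(None, domain, real).ratio()
        if score >= threshold:
            hits.append((real, round(score, 2)))
    return hits
```

A typo-squatted address like "springfieldgazete.com" (one letter dropped) scores well above the threshold against the real outlet, while an unrelated domain does not.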
A recent viral video appeared to show a major celebrity endorsing a political candidate. Even after verification teams proved the footage had been manipulated, polls showed that many people continued to believe the endorsement was genuine. The false narrative persisted even after it was thoroughly debunked.
The most important thing you can do is slow down. Our information consumption habits make us vulnerable to manipulation. When you encounter emotionally charged content, especially if it confirms your existing beliefs or triggers strong reactions, pause before sharing.
"Always consider the source," says Andy Carvin, an intelligence analyst who recently worked for the Atlantic Council's Digital Forensic Research Lab. "While it's impossible to know the details behind every potential source you come across, you can often learn a lot from what they say and how they say it."
Do they speak in absolute certainties? Do they proclaim they know the "truth" or "facts" about something and present that information in black and white terms? Do they ever acknowledge that they don't have all the answers? Do they attempt to convey nuance? Do they focus on assigning blame to everything they discuss? What's potentially motivating them to make these claims? Do they cite their sources?
Media literacy has become one of the most critical skills for navigating our information-saturated world, yet it remains woefully underdeveloped across most demographics. Carvin suggests giving strong consideration to your media consumption habits. When scrolling or watching, ask yourself three critical questions: Who benefits from this narrative? Who is amplifying it? What patterns of repetition do you notice across different sources?
"It may not be possible to answer all of these questions, but if you put yourself in the right mindset and maintain a healthy skepticism, it will help you develop a more discerning media diet," he said.
Before sharing content, slow down and apply the habits above: consider the source, ask who benefits from the narrative, and look for corroboration from independent outlets.
A number of apps and websites can also guide you to authentic content. These verification tools should supplement -- not replace -- human judgment and traditional verification methods. But they can help identify potential red flags, provide additional context, and point you toward reliable information.
The language you use when discussing false information significantly impacts how others perceive and respond to it. Poor communication can accidentally amplify the very narratives you're trying to counter, so choose your framing carefully.
Traditional crisis communications strategies are insufficient for narrative attacks. Organizations need proactive defensive measures, not just reactive damage control.
Cultural media literacy requires systematic changes to how we teach and reward information sharing. Schools should integrate source evaluation and digital verification techniques into their core curricula, not just as separate media literacy classes. News organizations should prominently display correction policies and provide clear attribution for their reporting.
Social media platforms should slow down the spread of viral content by introducing friction for sharing unverified claims. Professional associations across industries should establish standards for how their members communicate with the public about complex topics. Communities can organize local media literacy workshops that teach practical skills, such as identifying coordinated inauthentic behavior and understanding how algorithmic amplification works.
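The "friction" idea in the paragraph above can be sketched as a simple policy check: fast-moving, unverified claims trigger a confirmation prompt rather than a block. Everything here -- the `Post` fields, the 30-day account-age cutoff, and the 500-reshares-per-hour threshold -- is an illustrative assumption, not any platform's actual policy.

```python
from dataclasses import dataclass

@dataclass
class Post:
    source_verified: bool   # claim traced to a vetted source
    account_age_days: int   # age of the posting account
    reshare_velocity: int   # reshares in the last hour

def needs_friction(post, velocity_threshold=500):
    """Decide whether to interpose a confirmation step before resharing.

    The signals and thresholds are illustrative; real platforms combine
    many more features. The pattern is what matters: unverified claims
    that spread unusually fast get a pause, not a ban.
    """
    if post.source_verified:
        return False
    # Unverified claims from very new accounts, or spreading unusually
    # fast, trigger an "are you sure?" prompt rather than removal.
    return post.account_age_days < 30 or post.reshare_velocity > velocity_threshold
```

The design choice worth noting is that friction is reversible and content-neutral: it slows amplification without deciding what is true.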
Implementation depends on making verification tools more accessible and building new social norms around information sharing. Browser extensions that flag questionable sources, fact-checking databases that journalists and educators can easily access, and community-driven verification networks can democratize the tools currently available only to specialists. We need to reward careful, nuanced communication over sensational claims and create consequences for repeatedly spreading false information. This requires both individual commitment to slower, more thoughtful information consumption and institutional changes that prioritize accuracy over engagement metrics.
Narrative attacks represent a fundamental shift in how information warfare operates, requiring new defensive skills from individuals and organizations alike. The verification tools, detection techniques, and communication strategies outlined here aren't theoretical concepts for future consideration but practical necessities for today's information environment. Success depends on building these capabilities systematically, training teams to recognize manipulation tactics, and creating institutional cultures that reward accuracy over speed.
The choice isn't between perfect detection and complete vulnerability but between developing informed skepticism and remaining defenseless against increasingly sophisticated attacks designed to exploit our cognitive biases and social divisions.