Threat Intelligence

How AI-Generated Content is Making Coordinated Manipulation Harder to Detect and What to Do About It


The Cost of Running an Influence Operation Has Collapsed

Until recently, running a sophisticated coordinated influence operation required significant resources: human writers to produce varied content at scale, graphic designers to create supporting imagery, translators for multilingual campaigns, and producers for video-based disinformation. That resource requirement created a natural barrier to entry, limiting sophisticated operations to well-funded actors.

Generative AI has removed most of that barrier. A single operator with access to consumer AI tools can now produce thousands of varied, high-quality posts in multiple languages, generate synthetic imagery and video, create AI voiceovers that pass cursory inspection, and vary content templates rapidly enough to evade keyword-based detection. The cost has dropped from hundreds of thousands of dollars to effectively zero.

What This Means for Detection

The conventional markers of inauthentic content are increasingly unreliable. Grammar and spelling errors that once indicated non-native or automated content are absent from AI-generated material. Repetitive phrasing that once revealed template replication is now varied enough to evade string-matching detection. AI-generated profile images are indistinguishable from authentic photographs to human reviewers.

During a 2025 pharmaceutical product launch, AI Uniti analysis found that 64% of the TikTok videos in a coordinated anti-vaccine disinformation campaign used AI-generated voiceovers. The content quality was high enough that it was being shared by authentic users who had no idea they were amplifying synthetic material.

Why Behavioural Detection Remains Reliable

Content-based detection is becoming less reliable as generative AI improves. But behavioural detection remains robust against AI-generated content, because the behaviour of coordinated accounts does not change based on how their content is produced.

Accounts that are acting in coordination still post with synchronised timing. They still amplify each other before organic engagement occurs. They still follow recognisable network structures with core seeding accounts and amplification layers. These behavioural signatures are not masked by high-quality content.
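As an illustration of the first of these signatures, synchronised timing, a minimal sketch of burst detection over post timestamps might look like the following. This is not AI Uniti's actual methodology; the function name, data shape, and threshold values are all illustrative assumptions.

```python
from datetime import datetime, timedelta

def find_synchronised_bursts(posts, window_seconds=60, min_accounts=5):
    """Flag time windows in which an unusually large number of distinct
    accounts posted within `window_seconds` of each other.

    `posts` is a list of (account_id, datetime) tuples. The thresholds
    are illustrative, not calibrated values.
    """
    posts = sorted(posts, key=lambda p: p[1])
    bursts = []
    start = 0
    for end in range(len(posts)):
        # Shrink the window from the left until it spans <= window_seconds.
        while (posts[end][1] - posts[start][1]).total_seconds() > window_seconds:
            start += 1
        accounts = {acc for acc, _ in posts[start:end + 1]}
        if len(accounts) >= min_accounts:
            bursts.append((posts[start][1], posts[end][1], accounts))
    return bursts
```

A sliding-window approach like this works regardless of what the posts say, which is the point: the signal survives even when every post is uniquely worded AI output.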

This is why the detection methodology that AI Uniti applies focuses on temporal and behavioural patterns rather than content analysis. Whether the content is human-written or AI-generated, the network behaviour of a coordinated campaign follows predictable patterns that are detectable before the campaign reaches scale.

The Practical Implication for Enterprise Monitoring

If your current monitoring capability relies primarily on content signals, keyword matching, or sentiment analysis, it is increasingly vulnerable to AI-generated coordinated campaigns. The content will pass inspection. The sentiment will look organic. The keywords will be absent.

Detection needs to move to the network level: who is posting, when, in what sequence, with what amplification pattern, and connected to which other accounts. These questions are answerable regardless of whether the content was written by a human or a language model.
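One of those network-level questions, which accounts amplify each other faster and more often than organic engagement allows, can be sketched as a simple pairwise count. This is a hypothetical illustration, not a production detector; the function name, input format, and thresholds are assumptions.

```python
from collections import Counter

def suspicious_amplifiers(reshares, max_delay=120, min_repeats=3):
    """Flag account pairs that repeatedly reshare each other's posts
    within `max_delay` seconds of publication.

    `reshares` is a list of (amplifier, author, delay_seconds) tuples.
    Repeated near-instant amplification of the same author is a
    behavioural signal that content analysis cannot mask.
    """
    fast = Counter(
        (amplifier, author)
        for amplifier, author, delay in reshares
        if delay <= max_delay
    )
    return {pair: count for pair, count in fast.items() if count >= min_repeats}
```

In practice such pair counts would feed a graph analysis that separates core seeding accounts from amplification layers, but even this crude version shows why the approach is robust: the timing and repetition exist whether the reshared content is human-written or synthetic.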

The threat landscape is changing faster than most enterprise monitoring capabilities are adapting. The organisations that will be most exposed in 2026 and beyond are those that are still relying on the detection approaches that were designed for a pre-generative-AI world.

March 16, 2026
