In an age where digital platforms often shape geopolitical narratives more than diplomats or soldiers, the NATO Strategic Communications Centre’s Virtual Manipulation Brief 2025 reads like a chilling dispatch from a new kind of battlefield—one dominated not by tanks or missiles but by bots, deepfakes, and algorithmically amplified lies.
The 2025 edition marks a significant leap in scope and urgency. Drawing from over 11 million social media posts across ten platforms—including X (formerly Twitter), Telegram, YouTube, and VK—the report exposes the increasingly sophisticated, AI-accelerated information warfare campaigns by Russia and, to a growing extent, China. The conclusion is blunt: hostile digital influence is no longer experimental. It is industrial.
The Kremlin’s Digital Doctrine
The Kremlin’s disinformation playbook has evolved dramatically. It now fuses high-speed automation with emotional propaganda, exploiting not just Western divisions but also Western tools. Platforms like X and Telegram are central to these operations—X for reach and Telegram for narrative depth.
The data show a clear asymmetry: Kremlin-aligned posts, especially reposts, vastly outnumber pro-Western content, particularly in cross-platform bursts. Only 18.7% of Kremlin content is original, yet it garners far higher engagement. This is no accident. Russia has perfected an architecture of amplification that mimics legitimacy—98–99% of pro-Kremlin accounts on X recycle content, creating what the report calls an “amplification swarm.” The goal is clear: flood the zone with noise, drown the truth, and wear down resistance.
Narrative peaks align with geopolitical events, such as Zelenskyy’s meetings, NATO summits, or Trump’s statements. From “Traditional Values Defender” to “Nuclear Threat Rhetoric,” these information bursts vilify NATO, delegitimise Ukraine, and present Russia as a misunderstood power forced into war.
AI as Enabler—and Multiplier
One of the most striking aspects of the report is the role of artificial intelligence—not just in detection, but in deception. Hostile actors now use AI to generate realistic videos, automate accounts, and even create fabricated influencers. Platforms like X have embedded AI chatbots such as Grok that engage users in real time, creating the illusion of unbiased opinion. But these AI systems are vulnerable to manipulation and hallucination—offering fertile ground for malign influence.
Worse, AI isn’t just accelerating message delivery—it’s reshaping narrative strategy. Pro-Russian and pro-Chinese actors increasingly use AI to tailor messages across languages, platforms, and emotional registers. On TikTok, for instance, pre-election content warned of nuclear war and painted NATO as Ukraine’s saviour. Post-election, the narrative flipped—praising a potential Trump-Putin alliance and demonising NATO as a deep-state warmonger.
The China Factor: Strategic Calm, Tactical Chaos
While Russia uses emotion and aggression, China’s disinformation strategy is subtle, disciplined, and semantic. Its criticism of NATO’s Indo-Pacific engagement relies on framing the alliance as a relic of Cold War paranoia. Phrases like “ideological bias,” “destabilise,” and “zero-sum approach” are consistently used to construct a narrative of Western overreach.
Chinese-aligned actors replicate Russia’s playbook—coordinated cross-platform bursts, seeding identical talking points across TikTok, Telegram, YouTube, and X. The tone, however, is less shrill and more insinuative. Beijing’s messaging doesn’t just critique NATO—it seeks to reframe global perceptions of legitimacy and order.
US Election Effect
A key turning point came after the 2024 US elections. The volume of negotiation-themed posts spiked, with both pro- and anti-Kremlin narratives intensifying. Kremlin accounts painted the election as rigged and NATO as complicit, while Western narratives became more urgent, highlighting the need to defend Ukraine and counter Russian aggression.
Interestingly, pro-Kremlin narratives often exploited American voices critical of Ukraine—echoing and amplifying dissent from within the West itself. Elon Musk’s statements on X, for instance, were weaponised by Russian accounts post-election to sow doubt and portray a crumbling Western consensus.
The Cost of Complacency
The implications are profound. These aren’t random trolls. They are coordinated digital combatants executing real-time propaganda campaigns with the help of AI. The intent is not merely to mislead but to exhaust—draining public trust, polarising societies, and paralysing democratic decision-making.
The report’s recommendations are sobering. Governments must develop platform-specific countermeasures, coordinate counter-narratives, and map narrative environments with rapid response capabilities. Above all, public media literacy must be urgently scaled to blunt the impact of algorithmically driven manipulation.
Ignore At Your Peril
The information war is no longer a metaphor. It is a strategic reality reshaping the global order in real time. If the West fails to confront this threat with the seriousness it demands, the cost will not be merely informational. It will be geopolitical.
In this war, truth is the first casualty. But unlike kinetic wars, this one doesn’t need to destroy infrastructure—it only needs to destroy faith. And that is a far graver loss.
Ramananda Sengupta
In a career spanning three decades and counting, I’ve been the foreign editor of The Telegraph, Outlook Magazine and the New Indian Express. I helped set up rediff.com’s editorial operations in San Jose and New York, helmed sify.com, and was the founder editor of India.com. My work has featured in national and international publications like the Al Jazeera Centre for Studies, Global Times and Asahi Shimbun. My one constant over all these years, however, has been the attempt to understand rising India’s place in the world.
I can rustle up a mean salad, my oil-less pepper chicken is to die for, and it just takes some beer and rhythm and blues to rock my soul.