A viral campaign with a Russian doll structure is highlighting the dangers of synthetic media and manufactured outrage, says Marie Boran
If you saw #HollywoodAgainstZelenskyy trending and felt a flicker of “wait, when did celebrities start dictating European foreign policy?”, good. That discomfort is your threat model working.
The posts push slick talking-head clips of actors addressing the camera, urging European leaders to pressure Ukrainian President Volodymyr Zelensky into accepting a peace deal framed in pro-Kremlin talking points. The problem is that the celebrities didn’t say it. The clips are doctored or repurposed, with manipulated audio and context, and they’re seeded by low-credibility accounts before being rapidly reposted.
This is the part that should worry Irish and European readers: it’s not a random hoax. Researchers say the tactics match a long-running influence operation they call ‘Matryoshka’, an apt name. A matryoshka is a Russian nesting doll: open one and there’s another inside, and another again, each layer hiding the one that matters. Online, the same logic applies. A small set of origin accounts posts the first ‘doll’, the doctored clip and a ready-made hashtag, and then a wider set of seemingly independent accounts amplifies it. By the time it reaches your feed, you’re mostly seeing the outer layers: reposts of reposts, commentary screenshots, ‘I’m only sharing’ disclaimers. The source is buried on purpose.
Why Hollywood? Because fame is an authentication layer. We’re trained to treat recognisable faces as shorthand for trust, even when the message is nonsense. This is not The Terminator; it’s a boring supply chain: a clip scraped from a legitimate platform, a cheap voice model, a hashtag, and an algorithm that rewards outrage. The fake celebrity endorsement is just packaging designed to make a geopolitical demand feel like pop-culture consensus.
Platforms will tell you they removed the content, and in fairness, some clips do get taken down. But removal after a few hundred shares is like recalling a leaflet after it’s been dropped in every letterbox. The real distribution happens in the first few hours, when the content is novel, the captions are identical, and nobody wants to be the person who says “wait, this looks fake” without receipts.
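That ‘identical captions in the first few hours’ pattern is, incidentally, exactly the kind of signal coordination researchers look for. Here is a minimal sketch of the idea in Python; the account names, captions, timestamps and thresholds are all made up for illustration, and real detection systems are far more sophisticated:

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical post records: (account, caption, timestamp).
# All account names and captions below are illustrative, not real data.
posts = [
    ("acct_01", "Hollywood speaks out #ExampleHashtag", datetime(2025, 1, 10, 9, 0)),
    ("acct_02", "Hollywood speaks out #ExampleHashtag", datetime(2025, 1, 10, 9, 4)),
    ("acct_03", "Hollywood speaks out #ExampleHashtag", datetime(2025, 1, 10, 9, 7)),
    ("acct_04", "My honest review of a new film", datetime(2025, 1, 10, 9, 5)),
]

def flag_coordinated(posts, window=timedelta(minutes=30), min_accounts=3):
    """Flag captions posted verbatim by several distinct accounts
    within a short time window - one crude marker of coordination."""
    by_caption = defaultdict(list)
    for account, caption, ts in posts:
        by_caption[caption.strip().lower()].append((ts, account))
    flagged = []
    for caption, items in by_caption.items():
        items.sort()
        span = items[-1][0] - items[0][0]
        accounts = {account for _, account in items}
        if len(accounts) >= min_accounts and span <= window:
            flagged.append(caption)
    return flagged

print(flag_coordinated(posts))
```

A single organic post never trips this; three strangers typing the same sentence inside half an hour does, which is why genuinely independent outrage and a network on a schedule look so different under the hood.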
This is where Europe’s regulation, such as the Digital Services Act’s rules on systemic risks, has to become less abstract. A coordinated deepfake campaign is the textbook case: engineered virality, synthetic media, and a narrative designed to bend democratic decision-making while keeping authorship plausibly deniable behind layers of fabricated organic chatter.
So what does meaningful resilience look like, at user level, without becoming a full-time open source intelligence analyst?
First: treat hashtags like smoke, not proof. A coordinated campaign wants you to think “everyone is saying this”, when it may be a network on a schedule. Second: look for provenance. Is the clip on the celebrity’s verified account? If not, assume it’s manipulation, not evidence. Third: slow down. Disinformation is optimised for speed; scepticism is a latency tax.


