Inside the CopyCop Playbook: How to Fight Back in the Age of Synthetic Media
Key Takeaways
- CopyCop is scaling AI-driven influence operations globally. The Russian influence network known as CopyCop has created more than 300 fake media websites spanning North America, Europe, and beyond. The operation primarily uses AI-generated content to erode public trust and undermine Western support for Ukraine.
- AI has become the new engine of manipulation. The network uses self-hosted large language models (LLMs) to mass-produce fabricated news stories, deepfakes, and fake fact-checking sites that imitate legitimate journalism.
- Transparency and intelligence are the best defenses. Governments, newsrooms, and enterprises can counter these operations through domain monitoring, content verification, and proactive intelligence sharing.
The Rise of CopyCop: When Influence Operations Go Fully Digital
The latest Insikt Group report exposes one of the most expansive Russian influence operations to date: a network known as CopyCop, also known as Storm-1516.
Since early 2025, CopyCop has quietly deployed more than 300 inauthentic websites disguised as local news outlets, political parties, and even fact-checking organizations. These sites have appeared across North America, Europe, and other regions including Armenia, Moldova, and parts of Africa.
What sets CopyCop apart from earlier influence operations is its large-scale use of artificial intelligence. The network relies on self-hosted LLMs, specifically uncensored versions of a popular open-source model, to generate and rewrite content at scale. Thousands of fake news stories and “investigations” are produced and published daily, blending factual fragments with deliberate falsehoods to create the illusion of credible journalism.
The result is a disinformation ecosystem that looks and behaves like legitimate news. Its purpose is to advance Russia’s geopolitical objectives and erode Western support for Ukraine.
Inside the Playbook: How the Operation Works
Fake Outlets, Real Impact
CopyCop operates a vast web of cloned domains and mirrored subdomains designed to imitate legitimate media outlets. Many adopt regional branding and familiar naming conventions to appear authentic at first glance.
Each site is part of a distributed infrastructure built to withstand disruption and survive takedowns. When one domain is taken offline, mirrored copies appear elsewhere, often hosted on the same IP ranges. This illusion of legitimacy enables CopyCop’s stories to infiltrate online discussions, social media feeds, and even search results.
AI-Generated “Journalism” at Scale
CopyCop’s reliance on self-hosted LLMs marks a new phase in influence tradecraft. These models generate articles that weave together real and fabricated details, complete with bylines, quotations, and the stylistic cues of legitimate reporting.
Insikt Group researchers identified text artifacts that confirm AI authorship, including telltale phrases such as:
“Please note that this rewrite aims to provide a clear and concise summary of the original text while maintaining key details.”
“The tone is objective and factual, focusing on the information presented in the intelligence report.”
The models, fine-tuned on Russian state media sources, generate plausible articles in multiple languages, dramatically expanding CopyCop’s reach.
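Because these artifact phrases are so distinctive, even a simple string scan can surface candidate articles for human review. The sketch below is a minimal illustration of that idea, not a production detector; the phrase list is seeded from the examples above and would need continual curation as operators clean up their output.

```python
# Minimal sketch: flag articles containing known LLM-artifact phrases.
# The phrase list is illustrative, seeded from artifacts cited in the report.
LLM_ARTIFACTS = [
    "this rewrite aims to provide",
    "maintaining key details",
    "the tone is objective and factual",
]

def find_llm_artifacts(article_text: str) -> list[str]:
    """Return any known LLM-artifact phrases present in the text."""
    lowered = article_text.lower()
    return [phrase for phrase in LLM_ARTIFACTS if phrase in lowered]

sample = ("Please note that this rewrite aims to provide a clear and concise "
          "summary of the original text while maintaining key details.")
print(find_llm_artifacts(sample))
```

A real pipeline would pair this kind of lexical check with statistical detectors, since phrase matching catches only the sloppiest output.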
Narrative Engineering and Manipulation
At its core, CopyCop pursues a familiar objective: erode support for Ukraine and deepen political fragmentation in the Western countries backing it. Its content routinely targets Western leaders, institutions, and media. Recent campaigns include:
- Forged “leaked documents” alleging that Ukrainian officials misused Western aid or media funding.
- Deepfake videos falsely accusing Armenian officials of abuse and fabricated stories portraying French leaders as corrupt or politically repressive.
- Impersonation of French and Moldovan media outlets to publish fabricated corruption and election-interference stories.
- Inauthentic websites and social media accounts promoting pro-independence sentiment and amplifying domestic polarization in Canada’s Alberta province.
Each narrative is engineered to exploit local grievances and political divisions. The stories are then amplified through a secondary ecosystem of Telegram channels, YouTube accounts, and pro-Russian influence networks such as InfoDefense and Portal Kombat to create the illusion of organic consensus.
Poisoning the Information Well
By flooding the internet with synthetic “news,” CopyCop contaminates data sources that LLMs, search engines, and AI assistants rely on to generate answers. This deliberate poisoning strategy ensures that false narratives are not only consumed by people, but also ingested by algorithms. As Insikt Group warns, this strategy threatens the integrity of the global information supply chain.
From Awareness to Action: A Mitigation Playbook
The CopyCop report makes one thing clear: identifying influence operations is only half the battle. The next step is building resilience so that governments, newsrooms, enterprises, and individuals can recognize, counter, and contain foreign malign influence before it spreads.
For Governments
- Monitor domain registrations and hosting infrastructure to detect clusters of inauthentic media sites before they gain traction.
- Integrate threat intelligence feeds into election-security and information-integrity programs to identify early signs of coordinated activity.
- Coordinate across allied governments to share indicators of cross-border disinformation infrastructure.
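The infrastructure-monitoring step above can be sketched in a few lines: since the report notes that mirrored CopyCop sites often share IP ranges, grouping newly observed domains by hosting IP can surface suspicious clusters. The domains, IPs, and `min_size` threshold below are hypothetical placeholders, not indicators from the report.

```python
from collections import defaultdict

# Hypothetical feed of (domain, resolved_ip) pairs, e.g. from passive DNS.
observations = [
    ("example-regional-news.com", "203.0.113.10"),
    ("local-daily-report.net", "203.0.113.10"),
    ("truth-checker-online.org", "203.0.113.10"),
    ("unrelated-blog.example", "198.51.100.7"),
]

def cluster_by_ip(obs, min_size=3):
    """Group domains by hosting IP; keep IPs serving many new sites at once."""
    groups = defaultdict(list)
    for domain, ip in obs:
        groups[ip].append(domain)
    return {ip: doms for ip, doms in groups.items() if len(doms) >= min_size}

print(cluster_by_ip(observations))
```

Flagged clusters are leads for analysts, not verdicts; shared hosting is common, so registration timing, naming patterns, and content overlap should corroborate any finding.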
For Newsrooms and Media Organizations
- Strengthen verification workflows to detect AI-generated text, deepfakes, and synthetic imagery.
- Use threat intelligence insights to identify look-alike domains that mimic legitimate outlets.
- Train editorial staff to recognize telltale signs of LLM-generated content and suspicious bylines.
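Look-alike domain detection, as recommended above, can start from something as simple as string similarity against a watchlist of legitimate outlet domains. The sketch below uses Python's standard-library `difflib`; the watchlist, candidate domains, and threshold are illustrative assumptions.

```python
from difflib import SequenceMatcher

# Hypothetical watchlist of legitimate outlet domains to protect.
LEGITIMATE = ["lemonde.fr", "theglobeandmail.com"]

def flag_lookalikes(candidates, threshold=0.8):
    """Return (candidate, legit, score) triples for near-matching domains."""
    flagged = []
    for cand in candidates:
        for legit in LEGITIMATE:
            score = SequenceMatcher(None, cand, legit).ratio()
            if cand != legit and score >= threshold:
                flagged.append((cand, legit, round(score, 2)))
    return flagged

print(flag_lookalikes(["lernonde.fr", "completely-different.example"]))
```

Edit-distance checks miss homoglyph and subdomain tricks (e.g. Cyrillic look-alike characters), so a fuller workflow would add IDN normalization and certificate-transparency monitoring.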
For Enterprises
- Deploy brand-intelligence monitoring to uncover impersonation campaigns targeting executives, employees, or products.
- Develop incident-response plans to address influence operations and protect organizational reputation.
- Communicate proactively and transparently when false narratives arise to maintain credibility and public trust.
For Everyone
- Practice verification before amplification, questioning sources before sharing.
- Support transparency and accountability across online ecosystems, reinforcing the social norms that sustain truth.
Defending Truth in the Age of Synthetic Influence
As generative AI becomes pervasive, adversaries will continue to weaponize it to shape perception, distort reality, and undermine democratic institutions. Defending against these threats demands proactive intelligence, cross-sector collaboration, and a renewed commitment to information integrity.
Insikt Group continues to expose and analyze these operations, helping governments, enterprises, and media organizations understand how influence networks evolve and how to defend against them before they take root. Read the report in full to learn more.