Russia and other states are tightening regulation of deepfakes as they become a national security issue rather than a technological curiosity
For most of modern history, “big politics” operated in conditions of information scarcity and an excess of interpretation. The digital age has flipped that equation. Today we face a scarcity of authenticity and an excess of content. Deepfakes – fabricated videos and images, often with audio, generated by artificial intelligence – are cheap and capable of undermining the most basic foundation of social interaction: trust in public speech and visual evidence.
The internet is now saturated with such material. Surveys suggest that roughly 60% of people have encountered a deepfake video in the past year. Some of these creations are harmless or absurd, like exaggerated AI images of nine-story snowdrifts in Kamchatka that even circulated in the United States. But the technology is increasingly feeding serious political tension.
The Indo-Pakistani crisis in May 2025 illustrated this danger. A single fabricated video purporting to show the loss of two fighter jets spread online within hours, inflaming public sentiment, fueling military rhetoric and accelerating escalation faster than official denials could contain it. Deepfakes have thus moved from the realm of entertainment into that of national security.
It is no coincidence that late 2025 and early 2026 saw a wave of new regulations. States are beginning to treat AI fakes not as a novelty, but as a destabilizing factor. The global trend is toward control, enforcement, and coercive measures.
In countries often described as part of the “global majority,” the emphasis is on swift law enforcement. On January 10, Indonesia temporarily blocked access to Grok after the platform was used to create sexualized and unauthorized deepfakes. Jakarta’s response showed a readiness to cut off distribution channels immediately in cases of mass abuse, rather than waiting for lengthy standard-setting processes.
Vietnam offers an even clearer example of a criminal-law approach. At the end of 2025, authorities issued arrest warrants and conducted a trial in absentia against two citizens accused of systematically distributing “anti-state” materials, including AI-generated images and videos. Hanoi did not treat the cross-border nature of the publications as grounds for immunity. Instead, it framed deepfakes as an issue of digital sovereignty. In this view, the digital sphere is no longer a space where evidence can be fabricated and institutions discredited from abroad without consequence. The state has signaled its willingness to extend criminal law into the global digital environment.
Deepfake use is also shifting in character. Increasingly, AI manipulation is used for rapid, localized attacks on trust rather than complex special operations. On January 19, Indian police opened an investigation into a viral AI-generated image designed to discredit a local administration and provoke unrest. The aim was not strategic deception, but immediate social destabilization.
The European Union has already institutionalized its response. On December 17, the European Commission published the first draft of a Code of Practice on the labelling and identification of AI-generated content. This document translates the AI Act’s transparency principles into enforceable procedures: machine-readable labels, disclosure of AI generation, and formalized platform responsibilities.

Within Europe, deepfakes are also increasingly framed as a form of “digital violence.” On January 9, Germany’s Justice Ministry announced measures against malicious AI image manipulation, moving the issue from ethical debate into criminal law and personal protection.
The United States has focused on platform responsibility. In 2025, the Take It Down Act, signed by President Donald Trump, required platforms to quickly remove unauthorized intimate images and their AI-generated equivalents. In January, the Senate passed the DEFIANCE Act, granting victims the right to sue creators or distributors of deepfakes. Congress continues to debate the No Fakes Act, which would establish federal rights over the use of a person’s visual or voice likeness. Yet the American model remains fragmented, shaped by constitutional constraints and federalism, with many rules emerging at state level.
Russia is developing its own path. On January 20, Digital Development Minister Maksut Shadayev created a working group to combat illegal deepfake use, bringing together ministry officials and parliamentarians to draft legislative proposals and strengthen accountability. Earlier, in November 2025, a bill was introduced to amend the law “On Information, Information Technologies and Information Protection,” requiring mandatory labelling of video materials created or modified using AI. A related draft law proposes administrative penalties for missing or inaccurate labels. The State Duma’s IT committee plans a first reading in March 2026.
At the international level, outside Western “club” formats, two pragmatic channels remain. One is the development of technological standards for verifying content origin, such as C2PA (Content Credentials), an open industry ecosystem already adopted by major IT firms to label and verify media sources. The other lies in universal multilateral platforms like the International Telecommunication Union, where discussions on AI transparency continue. Only such neutral formats have a chance of producing inclusive standards that do not turn deepfake regulation into another instrument of geopolitical pressure or digital fragmentation.
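To make the C2PA approach concrete: Content Credentials work by embedding a signed provenance manifest inside the media file itself, which tools can then detect and validate. The sketch below is a minimal, illustrative heuristic only — it checks whether a JPEG byte stream appears to carry the JUMBF box type (`jumb`) and C2PA label (`c2pa`) that the standard uses for embedded manifests. It does not perform the cryptographic signature validation a real verifier (such as the official open-source C2PA SDK) would do, and the function names here are hypothetical.

```python
# Illustrative sketch: spot a possible C2PA "Content Credentials"
# manifest in a media byte stream. C2PA embeds its manifest store
# as a JUMBF box (type "jumb") whose description is labelled "c2pa";
# in JPEG files these live inside APP11 segments. This heuristic
# only looks for those byte markers -- it proves nothing about
# authenticity, since real verification requires checking the
# manifest's digital signatures against trusted certificates.

def looks_like_c2pa(data: bytes) -> bool:
    """Return True if the bytes contain JUMBF/C2PA markers."""
    return b"jumb" in data and b"c2pa" in data

def scan_file(path: str) -> bool:
    """Read a file and apply the marker heuristic (hypothetical helper)."""
    with open(path, "rb") as f:
        return looks_like_c2pa(f.read())
```

A marker hit would only tell an editor that provenance metadata is present and worth validating with a full C2PA toolchain; its absence is equally informative, since stripped or never-attached credentials are exactly the gap labelling regulations aim to close.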
The world is approaching a moment when systematic verification of authenticity in public communication will become routine in politics. Governments increasingly view synthetic content as a threat to elections, social stability and trust in institutions. At the same time, divergent legal regimes and different views on freedom of expression will generate conflicts of jurisdiction.
For states pursuing digital sovereignty, the regulation of deepfakes is becoming a test of their ability to adapt quickly and thoughtfully to a new information environment. The struggle is no longer simply about technology. It is about preserving the possibility of “genuine politics” in an age when seeing is no longer believing.
This article was first published by Kommersant, and was translated and edited by the RT team.