Deepfakes and political warfare
By: Alfredo López Ariza
In Pakistan, an imprisoned political leader appeared to address thousands of supporters. In Slovakia, a candidate was heard plotting to rig an election. In the United States, a president’s voice urged voters not to go to the polls. None of it was real. Technology now evolves faster than our ability to distinguish truth from fabrication—and faster still than the capacity of legal systems to keep pace.
Deepfakes—synthetic audio and video capable of replicating faces and voices with uncanny precision—have shifted from innovation to immediate threat. What began as an experiment in artificial intelligence has become a potent instrument of distortion, capable of reshaping elections, destroying reputations, and upending our shared sense of reality. In recent years, these tools have been deployed across continents to manipulate campaigns and discredit opponents.
During Pakistan’s 2023–2024 campaign, the PTI party used AI to allow Imran Khan to “appear” at rallies from prison, demonstrating that synthetic manipulation can amplify as effectively as it deceives. In Slovakia, a fabricated audio clip of Michal Šimečka allegedly plotting electoral fraud spread just 48 hours before voters went to the polls, too late to refute without magnifying the lie. In the United Kingdom, a deepfake audio recording of Keir Starmer insulting his staff circulated on the eve of the Labour Party conference. And in the United States’ 2024 primaries, a robocall mimicking Joe Biden’s voice urged New Hampshire voters to stay home: voter suppression through synthetic persuasion.
These incidents are not anomalies; they mark a new frontier in digital influence—one that Latin America and the Caribbean, with their polarized politics and fragile institutions, are now confronting directly. In the Dominican Republic, a deepfake targeting former senator and current Interior Minister Faride Raful exposed another dimension of this phenomenon: synthetic media used not to influence elections, but to inflict reputational harm as a form of political violence.
Beyond domestic manipulation, deepfakes are transforming the landscape of political warfare. The same technology that enables local actors to distort narratives can be exploited by foreign states seeking to interfere in elections for geopolitical gain. Synthetic media offers adversaries a low-cost, high-impact means to amplify disinformation campaigns that destabilize public trust and democratic legitimacy—all while maintaining plausible deniability. Intelligence assessments already warn of coordinated influence operations deploying AI-generated content to simulate local voices, fabricate scandals, and inflame polarization during critical electoral cycles.
This evolution extends into the realm of irregular and psychological warfare. Military and intelligence units increasingly recognize the potential of deepfakes as tools for psyops: operations designed to demoralize opponents, fabricate battlefield events, or manipulate perception during crises. In asymmetric conflicts, a single synthetic video can achieve the disruptive effect that once required vast propaganda machinery, blurring the line between defense strategy and digital deception.
As Hany Farid, a digital forensics expert at the University of California, Berkeley, and co-author of On the Threat of Deep Fakes to Democracy and Society (Konrad Adenauer Stiftung, 2020), warns, the danger lies not only in deception itself, but in the erosion of confidence in authentic information. When every image or voice may be false, doubt becomes a weapon, and truth is a matter of perpetual suspicion.
Latin America is now racing to regulate this emerging threat. Brazil leads with electoral court rules banning deepfakes in campaign propaganda and requiring AI-generated content to be labeled, while Chile advances a comprehensive AI bill targeting manipulative synthetic media. Mexico enforces its Ley Olimpia to prosecute AI-generated sexual imagery and debates new federal penalties. Argentina’s proposed bills would outlaw pornographic deepfakes and impose disclosure requirements. Colombia’s 2025 Law 2502 defines deepfakes in criminal law and increases penalties for AI-enabled identity fraud. Peru has introduced aggravating factors for crimes involving artificial intelligence and is considering pre-election labeling mandates. The Dominican Republic’s new Penal Code approaches the issue indirectly, criminalizing the dissemination of “false or altered” images, audio, or video that damage reputation, with penalties of up to ten years for intimate or extortion-related content.
Together, these measures reveal an emerging regional consensus: deepfakes are no longer a technological curiosity but a direct threat to democratic integrity, privacy, and trust. Governments are converging on three priorities—protecting elections from AI-manipulated content, extending sexual-image protections to synthetic material, and penalizing AI-driven impersonation and fraud. Yet these frameworks risk remaining symbolic unless they are accompanied by forensic capacity, cross-border cooperation, and specialized training to detect and prosecute synthetic manipulation.
As technology continues to outpace regulation and ethics, policymakers must learn to anticipate rather than merely react. To prevent the fight against disinformation from mutating into a government-sanctioned “Ministry of Truth,” verification must involve academia, the media, and independent oversight bodies. In a world of artificial voices and faces, democracy depends not only on belief, but on verification—and on who we can still trust to affirm: this is real.