Democracies have always lived with propaganda, rumors and biased media. What is new is the speed, the scale and now the realism of manipulated content. When anyone, with a modest budget and basic skills, can create a convincing video of a president declaring war or a candidate confessing to a crime, the core democratic mechanism — informed consent through the ballot box — is under direct pressure.
The question is no longer whether disinformation and deepfakes will affect public opinion and elections. They already do. The real issue is whether democracies can adapt quickly enough to survive this new information environment without sacrificing their core freedoms.
From spin to synthetic reality: what has really changed?
Political communication has always involved framing and selective use of facts. But three shifts create a new landscape:
- The cost of manipulation has collapsed – What once required studios, actors and significant budgets can now be done on a laptop with open-source tools or online services.
- Distribution is frictionless – Social networks and messaging apps allow instant propagation to millions, often segmented by micro-targeting.
- Verification is slower than virality – Fact-checkers and institutions respond in hours or days; a viral video needs minutes to shape a narrative.
Deepfakes — hyper-realistic synthetic audio or video based on AI models — crystallize these three trends. They are not just another form of “fake news”; they attack the evidence we rely on to decide what is real. When “seeing is no longer believing”, the entire epistemic infrastructure of democracy is at risk.
How deepfakes are already testing democracies
Several recent episodes illustrate both the power and the current limits of deepfake-based disinformation.
In 2022, during Russia’s invasion of Ukraine, a deepfake video of President Volodymyr Zelensky calling on Ukrainian troops to lay down their arms circulated on social networks. The video was clumsy by current standards and quickly debunked, but it was a proof of concept: in wartime, a few minutes of confusion can matter.
In early 2024, ahead of various national elections, security teams from major platforms flagged coordinated campaigns using AI-generated audio to imitate candidates’ voices in robocalls or WhatsApp messages, urging voters to boycott the election or change their vote. In several cases, these operations targeted diaspora communities or specific language groups where traditional media scrutiny was low.
A pattern is emerging:
- Targeted timing – Deepfakes released just before major events: debates, voting day, market-moving announcements.
- Plausible scandal – The content is crafted to be emotionally charged but credible: racist comments, secret deals, personal hypocrisy.
- Fragmented impact – Often, the goal is not to convince everyone, but to sway or demobilize a narrow segment, enough to tip a close election.
So far, the most widely reported cases have been detected quickly and did not overturn elections. But that record may give a false sense of security. Attackers are learning, tools are improving, and the next wave will be more subtle, multilingual and tailored to local political culture.
Why democracies are structurally vulnerable
Authoritarian regimes also face disinformation and deepfakes, but they have tools most democracies (fortunately) do not: strict control over media, systematic censorship, and often one dominant narrative enforced by law.
Democracies, by design, carry three structural vulnerabilities:
- Pluralism of voices – Multiple parties, media and influencers constantly compete to shape the narrative. This noisy environment makes it easier for malicious actors to blend in.
- Open information space – Freedom of expression and of the press limit the capacity of governments to filter content, even when it is clearly manipulative.
- Polarization as an amplifier – In many countries, trust in institutions, traditional media and political opponents is already low. Disinformation does not need to be believed by everyone; it just needs to reinforce existing biases.
Deepfakes amplify a trend already visible with “classic” disinformation:
- Erosion of shared facts – Citizens no longer agree on basic events or statistics; each camp has its own “truth ecosystem”.
- Strategic skepticism – When a compromising video emerges, supporters can dismiss it as “just another deepfake”, even if it is genuine.
- Paralysis of debate – If any evidence can be questioned, political debate shifts from “what should we do?” to “what actually happened?”, consuming time and trust.
This last point is crucial: deepfakes do not have to be widely used to be effective. The mere possibility that any content could be fake weakens the evidential basis of public discourse. In that sense, the “liar’s dividend” — the ability of wrongdoers to deny authentic evidence by pointing to deepfake technology — may be as dangerous as fake content itself.
Tech vs tech: the detection arms race
Technology is both the problem and part of the solution. Major platforms, research labs and startups are working on detection tools and provenance systems.
Three main approaches are emerging:
- Content-based detection – Algorithms analyze videos or audio to detect artifacts: inconsistent lighting, unstable facial micro-expressions, odd breathing patterns, spectral anomalies in sound.
- Source and metadata analysis – Systems track where a piece of content first appeared, how it spread, and what metadata (device, location, timestamps) accompanies it.
- Provenance and watermarking – Tools such as digital signatures, cryptographic watermarks or standards like C2PA (Coalition for Content Provenance and Authenticity) attach verifiable information to images, videos or audio at the time of capture or editing (a simplified signing sketch follows this list).
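To make the provenance idea concrete, here is a minimal Python sketch of the underlying mechanism: hash a media file at capture or publication time, sign the hash with the publisher’s private key, and let anyone holding the matching public key verify it later. This is a simplified illustration of the signing principle, not an implementation of C2PA or of any particular watermarking scheme; the file name is a placeholder and the example relies on Python’s hashlib module plus the third-party cryptography package.

```python
# Simplified sketch of the signing idea behind provenance standards such as C2PA.
# Not an implementation of the C2PA spec; uses the third-party "cryptography" package.
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def sha256_of_file(path: str) -> bytes:
    """Hash the raw bytes of a media file in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.digest()


def sign_media(path: str, private_key: Ed25519PrivateKey) -> bytes:
    """Capture device or publisher signs the file hash at creation time."""
    return private_key.sign(sha256_of_file(path))


def verify_media(path: str, signature: bytes, public_key: Ed25519PublicKey) -> bool:
    """Anyone with the publisher's public key can check the file is unmodified."""
    try:
        public_key.verify(signature, sha256_of_file(path))
        return True
    except InvalidSignature:
        return False


if __name__ == "__main__":
    key = Ed25519PrivateKey.generate()
    # "statement.mp4" is a placeholder path for this sketch.
    signature = sign_media("statement.mp4", key)
    print(verify_media("statement.mp4", signature, key.public_key()))  # True if untouched
```

Real provenance standards go further, embedding signed manifests that record each capture and edit step inside the file itself, but the trust model is the same: authenticity rests on verifiable keys rather than on how realistic the pixels or waveforms look.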
However, the detection battle is inherently asymmetric:
- Attackers adapt quickly – As detection models become better, generative models are fine-tuned to evade them, removing typical artifacts.
- False positives and negatives matter – A small rate of errors can be exploited: innocent content labeled as fake undermines trust; sophisticated fakes passing through can shift narratives.
- Coverage is partial – Many detection efforts are focused on major languages and platforms, while attacks can easily move to smaller networks, encrypted apps or under-resourced languages.
In business terms, this looks like a never-ending cost center. But democracies cannot opt out. Investment in detection and provenance is now part of critical infrastructure — like cybersecurity. The objective is not zero deepfakes (unrealistic), but making their use costly, risky and less effective.
Regulating platforms and AI models: where is the line?
Regulation is catching up, unevenly.
In the European Union, the Digital Services Act (DSA) imposes obligations on very large platforms to assess and mitigate systemic risks, including disinformation. The AI Act, meanwhile, introduces specific transparency requirements for AI-generated content: deepfakes must be disclosed as artificially generated or manipulated, with lighter obligations for evidently satirical or artistic works.
In the United States, regulatory initiatives are more fragmented, with state-level laws on deepfakes in political ads (for instance, requiring disclaimers) and sectoral guidance by agencies like the Federal Election Commission. Several countries in Asia are experimenting with pre-election content moderation rules and obligations for platforms to remove demonstrably false information that could affect voting.
Key regulatory levers under discussion include:
- Mandatory labeling of AI-generated political content, especially in advertising.
- Transparency obligations for platforms on how they recommend political content and moderate disinformation.
- Liability frameworks that incentivize rapid action against coordinated manipulation campaigns, without imposing generalized monitoring.
- Requirements for AI model providers to prevent or limit the generation of certain harmful deepfakes (for example, impersonation of public officials).
The tension is obvious: how to protect democratic processes without drifting into political censorship? Democracies cannot simply “ban” disinformation without clear and narrow definitions, due process and independent oversight. Overreach could backfire, feeding narratives of bias and reinforcing mistrust.
For businesses, especially platforms and AI providers, this evolving regulatory landscape is not just a compliance checklist. It is a strategic risk. Failing to anticipate political expectations can lead to sudden legal constraints, public boycotts, or loss of access to key markets.
Rebuilding democratic resilience: beyond fact-checking
Technical detection and regulation are necessary but insufficient. A resilient democracy cannot rely solely on platforms’ algorithms or government decrees. It needs citizens capable of navigating a polluted information environment.
Three axes of resilience stand out:
- Media and digital literacy – Teaching citizens, from school to continuing education, how algorithms work, how to verify a source, and how deepfakes are made. Several Nordic countries, often cited as resilient to disinformation, integrated this approach into their curricula years ago.
- Information infrastructure – Supporting independent, well-funded, locally rooted media capable of quickly checking and contextualizing viral content. When credible sources are weak, rumors fill the gap.
- Civic habits and norms – Encouraging political actors to adopt minimal standards: not sharing unverified viral content, correcting false information even when it benefits their side, and accepting independent fact-checking as part of the game.
These elements may sound idealistic, but they can be operationalized. For example:
- Governments can condition public funding or access to official press pools on adherence to transparent editorial and correction policies.
- Electoral commissions can create rapid-response teams working with independent media and civil society to debunk major election-related deepfakes within hours, in multiple languages.
- Schools, universities and employers can integrate short, practical modules on information hygiene into curricula and onboarding processes.
Resilience is not about creating “perfectly informed citizens” — an impossible goal. It is about reducing the proportion of people who are both highly exposed and highly vulnerable to manipulation, especially during sensitive political moments.
What businesses and institutions can do now
For companies and public institutions, disinformation and deepfakes are not only a threat to “democracy in general”. They represent specific operational and reputational risks: fake statements by CEOs affecting stock prices, synthetic audio “from the CFO” authorizing fraudulent transfers, or deepfake scandals targeting brand ambassadors.
Several concrete measures can be taken today:
- Establish a synthetic media policy – Define what types of AI-generated content your organization will or will not use (marketing, internal training, etc.), under which conditions and with which disclosures.
- Set up monitoring and rapid response – Use social listening tools and partnerships with specialized firms to detect early signs of deepfake or disinformation attacks involving your brand or leadership.
- Secure official channels – Make it very clear, on your website and official accounts, which channels are authoritative for announcements, crisis communication and investor information. This helps citizens and media quickly cross-check dubious content (a hypothetical machine-readable version is sketched after this list).
- Train leadership and key teams – Communication teams, legal departments, HR and top management should know the basics: what is technically possible, how to respond to suspected deepfakes, and when to involve law enforcement or regulators.
- Participate in standards and coalitions – Joining initiatives such as content provenance standards consortia or industry working groups can reduce fragmentation and increase the impact of protective measures.
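One way to operationalize the “secure official channels” point above is to publish a machine-readable manifest of authoritative channels at a predictable location, so journalists, fact-checkers and partner platforms can check it programmatically. The sketch below is a hypothetical illustration only: the URL, the well-known path, the JSON schema and the field names are assumptions made for this example, not an existing standard.

```python
# Hypothetical sketch: an organization publishes a machine-readable manifest of its
# authoritative channels, and anyone can check whether a given account or domain
# appears on it. The URL, path and schema are illustrative assumptions, not a standard.
import json
import urllib.request

MANIFEST_URL = "https://example.org/.well-known/official-channels.json"  # hypothetical


def load_manifest(url: str = MANIFEST_URL) -> dict:
    """Fetch the published manifest of official channels."""
    with urllib.request.urlopen(url, timeout=10) as response:
        return json.load(response)


def is_authoritative(handle_or_domain: str, manifest: dict) -> bool:
    """Return True if an account handle or domain is declared as official."""
    declared = {channel["id"].lower() for channel in manifest.get("channels", [])}
    return handle_or_domain.lower() in declared


# Inline example of what such a manifest could contain.
EXAMPLE_MANIFEST = {
    "organization": "Example Corp",
    "updated": "2024-05-01",
    "channels": [
        {"id": "press.example.org", "type": "website"},
        {"id": "@examplecorp", "type": "social"},
    ],
}

print(is_authoritative("@examplecorp", EXAMPLE_MANIFEST))   # True
print(is_authoritative("@examp1ecorp", EXAMPLE_MANIFEST))   # False: look-alike handle
```

The value lies less in the code than in the commitment: a stable, HTTPS-served list of official channels gives outsiders something concrete to check a suspicious “leaked statement” against within minutes.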
For public institutions — electoral commissions, ministries, local authorities — the stakes are even higher. They can:
- Run pre-election awareness campaigns explicitly about deepfakes, with examples and simple verification tips.
- Publish clear communication protocols (what channels they use, how they verify major announcements) so that citizens know where to check suspicious messages.
- Coordinate with platforms in advance of elections to establish fast lanes for flagging and investigating high-risk content.
Interestingly, the same tools that threaten democratic debate can also be used to protect it. AI can help identify coordinated behavior, map influence networks, or generate counter-narratives tailored to specific communities. The ethical line is thin, but ignoring these capabilities while malicious actors exploit them would be a strategic mistake.
Can democracies survive this era?
Survival is not really the right metric. Democracies will not suddenly disappear because of disinformation and deepfakes. The more realistic risk is gradual degradation: elections where significant portions of the electorate vote based on fabricated events, public debate dominated by scandal-of-the-day videos, and institutions constantly on the defensive, trying to catch up with viral falsehoods.
The key question is: can democracies adapt their institutions, regulations and civic culture fast enough to keep the informational field “good enough” for meaningful choice?
Several signals are cautiously encouraging:
- Regulation is converging, at least on basic transparency and platform obligations.
- Technical communities across academia, industry and civil society are collaborating on shared standards for provenance and labeling.
- Public awareness of deepfakes has grown; surveys show many citizens now know that “perfect videos” can be artificially generated.
But adaptation will not be automatic. It requires sustained investment, political courage and a shift in mindset:
- From reacting to each scandal to building long-term resilience.
- From outsourcing trust to platforms to rebuilding direct, verifiable channels between institutions, media and citizens.
- From seeing disinformation as a marginal issue to recognizing it as a structural factor in economic, social and geopolitical stability.
Democracies have survived wars, financial crises and technological revolutions. They can navigate the era of disinformation and deepfakes — but only if they treat information integrity not as a side topic for communication teams, but as a core component of their security, competitiveness and legitimacy.
For policymakers, business leaders and citizens, the practical implication is the same: now is the time to update processes, tools and habits. In a world where any image, voice or video can be faked, trust will depend less on what we see once, and more on how consistently, over time, people and institutions prove they deserve to be believed.
