Can democracies survive the era of disinformation and deepfakes shaping public opinion and elections?

Democracies have always lived with propaganda, rumors and biased media. What is new is the speed, the scale and now the realism of manipulated content. When anyone, with a modest budget and basic skills, can create a convincing video of a president declaring war or a candidate confessing to a crime, the core democratic mechanism — informed consent through the ballot box — is under direct pressure.

The question is no longer whether disinformation and deepfakes will affect public opinion and elections. They already do. The real issue is whether democracies can adapt quickly enough to survive this new information environment without sacrificing their core freedoms.

From spin to synthetic reality: what has really changed?

Political communication has always involved framing and selective use of facts. But three shifts create a new landscape:

- Speed: false content can reach millions of people in hours, long before any correction catches up.
- Scale: producing and spreading manipulated content no longer requires a state apparatus, only cheap tools and a network of accounts.
- Realism: AI-generated audio and video can now imitate real people convincingly enough to fool an ordinary viewer.

Deepfakes — hyper-realistic synthetic audio or video based on AI models — crystallize these three trends. They are not just another form of “fake news”; they attack the evidence we rely on to decide what is real. When “seeing is no longer believing”, the entire epistemic infrastructure of democracy is at risk.

How deepfakes are already testing democracies

Several recent episodes illustrate both the power and the current limits of deepfake-based disinformation.

In 2022, during Russia’s invasion of Ukraine, a deepfake video of President Volodymyr Zelensky calling on Ukrainian troops to lay down their arms circulated on social networks. The video was clumsy by current standards and quickly debunked, but it was a proof of concept: in wartime, a few minutes of confusion can matter.

In early 2024, ahead of various national elections, security teams from major platforms flagged coordinated campaigns using AI-generated audio to imitate candidates’ voices in robocalls or WhatsApp messages, urging voters to boycott or change their vote. In several cases, these operations targeted diaspora communities or specific language groups where traditional media scrutiny was low.

A pattern is emerging:

- synthetic content is released at moments of maximum time pressure, such as a military crisis or the final days before a vote;
- it targets audiences where scrutiny is weakest, including diaspora communities and minority-language groups;
- it travels through hard-to-monitor channels such as robocalls and private messaging apps rather than public feeds.

So far, the most widely reported cases have been detected quickly and did not overturn elections. But this may give a false sense of security. Attackers are learning, tools are improving, and the next wave will be more subtle, multilingual and tailored to local political culture.

Why democracies are structurally vulnerable

Authoritarian regimes also face disinformation and deepfakes, but they have tools most democracies (fortunately) do not: strict control over media, systematic censorship, and often one dominant narrative enforced by law.

Democracies, by design, combine three vulnerabilities:

- they cannot and should not centrally control the media, so anyone can publish and amplify content;
- they protect freedom of expression, which limits how aggressively false content can be suppressed;
- their legitimacy rests on public trust and contested elections, which makes the information environment itself a prime target.

Deepfakes amplify a trend already visible with “classic” disinformation:

- trust in traditional media and institutions keeps eroding;
- audiences fragment into communities holding incompatible versions of reality;
- and, above all, a generalized doubt spreads: if anything can be faked, nothing has to be believed.

This last point is crucial: deepfakes do not have to be widely used to be effective. The mere possibility that any content could be fake weakens the evidential basis of public discourse. In that sense, the “liar’s dividend” — the ability of wrongdoers to deny authentic evidence by pointing to deepfake technology — may be as dangerous as fake content itself.

Tech vs tech: the detection arms race

Technology is both the problem and part of the solution. Major platforms, research labs and startups are working on detection tools and provenance systems.

Three main approaches are emerging:

- detection: machine learning models trained to spot the statistical artifacts that generation tools leave in synthetic audio and video;
- provenance: cryptographic signatures, watermarks and content credentials attached when media is captured or published, so that authentic material can be verified later (a minimal sketch follows this list);
- behavioral analysis: looking less at the content itself than at how it spreads, in order to flag coordinated networks of accounts pushing the same material.
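To make the provenance idea concrete, here is a minimal sketch in Python. It assumes, purely for illustration, that a publisher such as an electoral commission posts SHA-256 fingerprints of its authentic videos in a public registry; real provenance standards such as C2PA content credentials instead embed signed manifests inside the file, which survives redistribution far better than an external lookup.

```python
# Minimal, illustrative sketch of hash-based provenance checking.
# Assumption: a publisher maintains a registry mapping SHA-256 digests of
# authentic media files to a short description. Real systems (e.g. C2PA
# content credentials) embed signed manifests in the media itself.

import hashlib
from pathlib import Path

# Hypothetical registry published by an electoral commission or newsroom.
AUTHENTIC_MEDIA = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b":
        "Official address, 2024-03-01",
}

def sha256_of_file(path: Path) -> str:
    """Return the SHA-256 hex digest of a file, read in 1 MiB chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def check_provenance(path: Path) -> str:
    """Report whether a local copy matches a known authentic release."""
    fingerprint = sha256_of_file(path)
    if fingerprint in AUTHENTIC_MEDIA:
        return f"Matches authentic release: {AUTHENTIC_MEDIA[fingerprint]}"
    # An unknown hash is not proof of forgery: any re-encoding or crop
    # changes the digest, which is why signed, embedded manifests are
    # preferred in practice.
    return "No match in the registry; treat as unverified."

if __name__ == "__main__":
    sample = Path("statement.mp4")  # hypothetical file name
    if sample.exists():
        print(check_provenance(sample))
    else:
        print("Place a media file named statement.mp4 next to the script to try it.")
```

The limitation flagged in the comments is precisely why provenance standards sign metadata inside the file rather than relying on exact-byte matching: a single legitimate re-encoding breaks a plain hash.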

However, the detection battle is inherently asymmetric:

- attackers only need one convincing fake to slip through, while defenders have to evaluate everything;
- every detector that becomes public gives generation tools something to train against;
- detection verdicts are probabilistic and often arrive after the content has already gone viral.

In business terms, this looks like a never-ending cost center. But democracies cannot opt out. Investment in detection and provenance is now part of critical infrastructure — like cybersecurity. The objective is not zero deepfakes (unrealistic), but making their use costly, risky and less effective.

Regulating platforms and AI models: where is the line?

Regulation is catching up, unevenly.

In the European Union, the Digital Services Act (DSA) imposes obligations on very large platforms to assess and mitigate systemic risks, including disinformation. The AI Act, meanwhile, introduces specific transparency requirements for AI-generated content: users must be informed when they interact with deepfakes, with some exceptions such as satire or legitimate artistic use.

In the United States, regulatory initiatives are more fragmented, with state-level laws on deepfakes in political ads (for instance, requiring disclaimers) and sectoral guidance by agencies like the Federal Election Commission. Several countries in Asia are experimenting with pre-election content moderation rules and obligations for platforms to remove demonstrably false information that could affect voting.

Key regulatory levers under discussion include:

- mandatory labeling of AI-generated or manipulated content, especially in political advertising;
- obligations for large platforms to assess and mitigate disinformation risks, backed by audits and sanctions;
- accelerated procedures for removing demonstrably false content in pre-election periods;
- transparency and provenance requirements for AI model providers, such as watermarking generated output.

The tension is obvious: how to protect democratic processes without drifting into political censorship? Democracies cannot simply “ban” disinformation without clear and narrow definitions, due process and independent oversight. Overreach could backfire, feeding narratives of bias and reinforcing mistrust.

For businesses, especially platforms and AI providers, this evolving regulatory landscape is not just a compliance checklist. It is a strategic risk. Failing to anticipate political expectations can lead to sudden legal constraints, public boycotts, or loss of access to key markets.

Rebuilding democratic resilience: beyond fact-checking

Technical detection and regulation are necessary but insufficient. A resilient democracy cannot rely solely on platforms’ algorithms or government decrees. It needs citizens capable of navigating a polluted information environment.

Three axes of resilience stand out:

- media and digital literacy, so that citizens learn to check sources and treat emotionally charged content with suspicion;
- support for independent journalism and fact-checking, which provide the reference points against which fakes can be compared;
- fast, credible and transparent communication from institutions, so that official corrections are actually believed when they matter.

These elements may sound idealistic, but they can be operationalized. For example:

- media literacy modules can be built into school curricula and adult education programs;
- newsrooms and electoral authorities can run pre-bunking campaigns before votes, explaining the manipulation techniques most likely to appear;
- institutions can publish official statements through authenticated, well-known channels, so that forgeries are easier to disprove.

Resilience is not about creating “perfectly informed citizens” — an impossible goal. It is about reducing the proportion of people who are both highly exposed and highly vulnerable to manipulation, especially during sensitive political moments.

What businesses and institutions can do now

For companies and public institutions, disinformation and deepfakes are not only a threat to “democracy in general”. They represent specific operational and reputational risks: fake statements by CEOs affecting stock prices, synthetic audio “from the CFO” authorizing fraudulent transfers, or deepfake scandals targeting brand ambassadors.

Several concrete measures can be taken today:

- define crisis protocols for fake statements attributed to executives, including pre-agreed channels for rapid, verifiable denial;
- require out-of-band verification, such as a call back on a known number or a second approver, for any payment or sensitive request received by voice or video;
- monitor for impersonations of the brand, its leaders and its ambassadors, and rehearse the response before it is needed;
- train employees to treat urgent, emotionally charged audio or video requests as potential fraud.

For public institutions — electoral commissions, ministries, local authorities — the stakes are even higher. They can:

- publish authoritative information through verified, well-known channels and keep those channels active between crises;
- prepare rapid-response procedures for election-period rumors and fakes, with clear ownership and pre-approved messaging;
- coordinate with platforms, researchers and fact-checkers ahead of sensitive dates rather than improvising during them;
- communicate openly about known manipulation attempts, so the public is primed before the next one lands.

Interestingly, the same tools that threaten democratic debate can also be used to protect it. AI can help identify coordinated behavior, map influence networks, or generate counter-narratives tailored to specific communities. The ethical line is thin, but ignoring these capabilities while malicious actors exploit them would be a strategic mistake.
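As a rough illustration of the “coordinated behavior” point, the sketch below flags groups of accounts that post near-identical text within a short time window. The field names, sample posts and thresholds are assumptions chosen for the example; real influence-mapping systems combine many more signals, such as posting rhythms, follower graphs and reuse of the same media files.

```python
# Toy heuristic for possible coordination: many distinct accounts posting
# near-identical text within a short time window. The field names, sample
# data and thresholds are illustrative assumptions, not a real platform API.

from collections import defaultdict
from datetime import datetime, timedelta

posts = [
    {"account": "user_a", "text": "Candidate X admitted fraud! Share before it is deleted", "time": "2024-03-01T10:00:00"},
    {"account": "user_b", "text": "candidate x admitted FRAUD!! share before it is deleted", "time": "2024-03-01T10:03:00"},
    {"account": "user_c", "text": "Candidate X admitted fraud! Share before it is deleted.", "time": "2024-03-01T10:07:00"},
    {"account": "user_d", "text": "Lovely weather at the rally today", "time": "2024-03-01T10:05:00"},
]

def normalize(text: str) -> str:
    """Collapse case and punctuation so trivially edited copies match."""
    return "".join(ch for ch in text.lower() if ch.isalnum() or ch.isspace()).strip()

def flag_coordination(posts, min_accounts=3, window=timedelta(minutes=15)):
    """Return messages pushed by at least min_accounts accounts within the window."""
    groups = defaultdict(list)
    for post in posts:
        groups[normalize(post["text"])].append(post)
    flagged = []
    for message, items in groups.items():
        accounts = {p["account"] for p in items}
        times = sorted(datetime.fromisoformat(p["time"]) for p in items)
        if len(accounts) >= min_accounts and times[-1] - times[0] <= window:
            flagged.append({"message": message, "accounts": sorted(accounts)})
    return flagged

for hit in flag_coordination(posts):
    print(f"{len(hit['accounts'])} accounts pushed: {hit['message']!r}")
```

A heuristic this crude would also flag organic virality, which is exactly where the ethical line mentioned above becomes thin: distinguishing genuine enthusiasm from manufactured amplification requires richer signals and human review.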

Can democracies survive this era?

Survival is not really the right metric. Democracies will not suddenly disappear because of disinformation and deepfakes. The more realistic risk is gradual degradation: elections where significant portions of the electorate vote based on fabricated events, public debate dominated by scandal-of-the-day videos, and institutions constantly on the defensive, trying to catch up with viral falsehoods.

The key question is: can democracies adapt their institutions, regulations and civic culture fast enough to keep the informational field “good enough” for meaningful choice?

Several signals are cautiously encouraging:

- the most prominent deepfake operations to date have been detected and debunked quickly;
- regulators are moving, with the DSA, the AI Act and a growing number of national rules on political deepfakes;
- platforms, research labs and startups keep investing in detection and provenance infrastructure;
- public awareness that audio and video can be faked is rising, even if unevenly.

But adaptation will not be automatic. It requires sustained investment, political courage and a shift in mindset:

- from reactive debunking to building resilience before crises hit;
- from treating disinformation as a communications problem to treating information integrity as critical infrastructure;
- from expecting platforms or governments to solve the problem alone to sharing responsibility across institutions, companies and citizens.

Democracies have survived wars, financial crises and technological revolutions. They can navigate the era of disinformation and deepfakes — but only if they treat information integrity not as a side topic for communication teams, but as a core component of their security, competitiveness and legitimacy.

For policymakers, business leaders and citizens, the practical implication is the same: now is the time to update processes, tools and habits. In a world where any image, voice or video can be faked, trust will depend less on what we see once, and more on how consistently, over time, people and institutions prove they deserve to be believed.
