The European Parliament Elections Will Serve as a Litmus Test for the Fight Against Disinformation

A lot has changed since the last European elections. When EU citizens vote for the European Parliament (EP) from June 6 to 9, it will be the first election since the widespread release of generative AI tools made it easier to create and share deceptive election content. The EP election is one of many in a super election year, a crucial moment for democracies worldwide, including the United States, to assess the threat disinformation poses to elections and the effectiveness of regulatory efforts.

On the EU’s eastern border, Slovakia has been a prime example of election-related disinformation. Likely deepfake audio of the Western-aligned candidate Michal Šimečka (leader of the liberal Progressive Slovakia party and member of the Renew political group) spread on social media the night before the country’s parliamentary elections in 2023. In the audio, Šimečka appeared to discuss how to manipulate the election by buying votes from the marginalized Roma minority. Similarly, the presidential elections in April were plagued with false and misleading information, including the claim that the more EU- and US-friendly candidate, former Foreign Minister Ivan Korčok, planned to send troops to Ukraine. In both cases, the candidate most directly targeted by disinformation lost, even if disinformation cannot be proven to be the primary cause. In Slovakia, as in other parts of the EU and the US, experts have traced disinformation to Russian-controlled propaganda sites and social media pages, whose content far-right parties then frequently spread domestically. Will the EP elections suffer from similar attempts to spread disinformation?

While there are reports of AI-generated election disinformation, experts disagree about how much it will affect election outcomes. Generative AI has lowered the barriers to creating and spreading disinformation. Some argue, however, that the impact of AI has been overblown because manipulated content has long circulated online and voters have already grown skeptical. Regardless, convincing deepfakes and the sheer quantity of disinformation are likely to make voters less trusting of any election-related information. Democracies should be vigilant in protecting their institutions against the risks this poses.

Risks to Democratic Processes

Disinformation is as much a challenge for the online platforms where it spreads as it is for democratic governments. But it is not easy to address. First, large online platforms prioritize user engagement and wide reach for content. Disinformation, especially if it plays on users’ emotions and concerns, can be highly engaging and spread quickly. Determining truthfulness, intent to mislead, and whether content has been manipulated by AI are all hard tasks. Second, not all election disinformation is equal. Some of it can cause direct harm and interfere with the electorate’s vote, such as false information about voting processes. A harder challenge is addressing active mischaracterizations of candidates and their platforms. Online platforms have policies that prohibit some forms of election-related disinformation, but they do not have a good or transparent record of enforcing those standards.

Campaigns, journalists, and civil society play an essential role in fact-checking information. To do so, they need online platforms to provide timely data about how information spreads so the response can be quick. Yet Meta has announced that, after the EP election but before the US presidential election in November, it will replace the transparency tool CrowdTangle with a platform that experts worry will be inadequate for understanding the role disinformation plays in this year’s elections.

Attempts to counter disinformation also carry real risks to freedom of expression and privacy. The EU upholds fundamental rights, and member states cannot impose a general monitoring requirement on platforms for illegal or harmful content. Meta’s Oversight Board, the quasi-judicial body that reviews content decisions and recommends policy changes, pressured Meta to label AI-generated content while leaving it online so as not to interfere with freedom of speech. Both governments and platforms should ensure they protect human rights when creating and implementing systems to fight election disinformation.

The EU’s Tech Regulations and Voluntary Frameworks

The EU aims to protect fundamental rights while safeguarding European democracy in some of its latest tech regulations. One example is the recent Digital Services Act (DSA). Under the DSA, the most widely used platforms in the EU are obliged to remove illegal content. While the law does not directly prohibit disinformation, the European Commission is using it to pressure online platforms to tackle disinformation. On April 30, the Commission opened an investigation into Facebook and Instagram for potential breaches of the DSA. The inquiry addresses the dissemination of disinformation, political advertising, notice-and-action procedures, and the planned shutdown of CrowdTangle.

To mitigate election integrity risks, the Commission created the DSA transparency database and issued DSA election guidelines for businesses like Google, Meta, Microsoft, Snap, TikTok, and X. Those guidelines emphasize transparency, for example, on political advertising. They call for tech companies to establish dedicated teams to work with governments across Europe, especially on recognizing “deep fakes.” While the guidelines push platforms to promote accurate election information, many platforms, especially Meta, have shifted away from featuring credible news outlets. This limits voters’ exposure to and engagement with a diverse range of trustworthy election sources.

A number of additional measures beyond the DSA can help protect against election disinformation:

  • The EU’s recently finalized AI Act could become another important measure to limit the future risks that AI poses to democratic processes. While not yet in effect, it will require transparency and fundamental rights impact assessments for high-risk AI systems, for example, systems intended to influence the outcome of an election.
  • The Regulation on the transparency and targeting of political advertising, with further requirements, including ad repositories, to rein in election disinformation, though these rules will kick in only after the EP elections this June.
  • The Code of Practice on Disinformation from 2018, under which the online industry voluntarily agreed to self-regulate against disinformation; the code was strengthened in 2022. Under Article 35 of the DSA, the Code of Practice could evolve into a Code of Conduct with mandatory audits as a crucial compliance tool.
  • The AI Election Accord, a set of voluntary commitments from prominent technology firms that focuses on authenticating and watermarking content and addressing the deceptive use of AI. The Accord is another positive step, as it includes some of the companies that developed generative AI tools, though those companies must still show that they take threats to democracy seriously.

Voter Disengagement: A Critical Consequence of Election Disinformation

The EU, the United States, and the many other democracies holding elections this year have plenty to learn from each other about addressing election disinformation. They share the attention of sophisticated propaganda operations, especially from Russia and China, dedicated to spreading election disinformation. The EU’s regulatory efforts to push online platforms to better address election disinformation could have a spillover effect. As with other EU tech regulations, like the General Data Protection Regulation, other countries will also benefit if these companies improve how they moderate content on their platforms.

However, the risk is not only that voters will be misinformed but also that, overwhelmed by contradictory information, they will decide to stay home. Narratives that discourage voting are already circulating and will continue. Democracies must not only work together and learn from each other but also support a healthy civil society that can challenge election disinformation and promote the exercise of voting rights.

