Disinformation caught many people off guard during the 2016 Brexit referendum and US presidential election. Since then, a mini-industry has developed to analyse and counter it.
Yet despite that, we have entered 2024 – a year of more than 40 elections worldwide – more fearful than ever about disinformation. In many ways, the problem is more challenging than it was in 2016.
Advances in technology since then are one reason, in particular the rapid development of synthetic media, otherwise known as deepfakes. It is increasingly difficult to know whether media has been fabricated by a computer or is based on something that really happened.
We’ve yet to really understand how big an impact deepfakes could have on elections, but a number of examples point to how they may be used. This may be the year when many mistakes are made and lessons learned.
Since the disinformation propagated around the votes in 2016, researchers have produced countless books and papers, journalists have retrained as fact-checking and verification experts, and governments have convened “grand committees” and established centres of excellence. Libraries have become the focus of resilience-building strategies, and a range of new bodies has emerged to provide analysis, training, and resources.
This activity hasn’t been fruitless. We now have a more nuanced understanding of disinformation as a social, psychological, political, and technological phenomenon. Efforts to support public interest journalism and the cultivation of critical thinking through education are also promising. Most notably, major tech companies no longer pretend to be neutral platforms.
In the meantime, policymakers have rediscovered their duty to regulate technology in the public interest.
AI and synthetic media
Regulatory discussions have gained urgency now that AI tools to create synthetic media – media partially or fully generated by computers – have gone mainstream. These deepfakes can be used to imitate the voice and appearance of real people. The results are impressively realistic and require little skill or resources to produce.
This is the culmination of the wider digital revolution whereby successive technologies have made high-quality content production accessible to almost anyone. In contrast, regulatory structures and institutional standards for media were mostly designed in an era when only a minority of professionals had access to production.
Political deepfakes can take different forms. The recent Indonesian election saw a deepfake video “resurrecting” the late President Suharto. This was ostensibly to encourage people to vote, but it was accused of being propaganda because it was produced by the political party that he led.
Perhaps a more obvious use of deepfakes is to spread lies about political candidates. For example, fake AI-generated audio released days before Slovakia’s parliamentary election in September 2023 attempted to portray the leader of Progressive Slovakia, Michal Šimečka, as having discussed with a journalist how to rig the vote.
Aside from the obvious effort to undermine a political party, it is worth noting how this deepfake, whose origin was unclear, exemplifies wider efforts to scapegoat minorities and demonise mainstream journalism.
Fortunately, in this instance, the audio was of poor quality, which made it quicker and easier for fact-checkers to confirm its inauthenticity. However, the integrity of democratic elections cannot rely on the ineptitude of the fakers.
Deepfake audio technology has already reached a level of sophistication that makes detection difficult. Deepfake video still struggles with certain human features, such as the representation of hands, but the technology is young.
It is also important to note that the Slovakian audio was released during the final days of the election campaign. This is a prime time to launch disinformation and manipulation attacks, because the targets and independent journalists have their hands full and little time to respond.
Investigating deepfakes is also expensive, time-consuming, and difficult, so it is not clear how electoral commissions, political candidates, the media, or indeed the electorate should respond when potential cases arise. After all, a false accusation that something is a deepfake can be as troubling as an actual deepfake.
Another way deepfakes could affect elections can be seen in how they are already widely used to harass and abuse women and girls. This kind of sexual harassment fits an existing pattern of abuse that limits women’s political participation.
Questioning electoral integrity
The difficulty is that it’s not yet clear exactly what impact deepfakes could have on elections. We may well see similar uses of deepfakes in upcoming elections this year, and perhaps uses not yet conceived of.
But it’s also worth remembering that not all disinformation is high-tech. There are other ways to attack democracy. Rumours and conspiracy theories about the integrity of the electoral process are an insidious trend. Electoral fraud is a global concern, given that many countries are democracies in name only.
Clearly, social media platforms enable and drive disinformation in many ways, but it is a mistake to assume the problem begins and ends online. One way to think about the challenge of disinformation during upcoming elections is to think about the strength of the systems that are supposed to uphold democracy.
Is there an independent media system capable of providing high-quality investigations in the public interest? Are there independent electoral administrators and bodies? Are there independent courts to adjudicate if necessary?
And is there sufficient commitment to democratic values over self-interest amongst politicians and political parties? In this year of elections, we may well find out the answers to these questions.