Easy-to-make deepfakes have already emerged in campaigns around the world. An audio deepfake impersonating U.S. President Joe Biden stirred alarm among politicians in a year with a record number of elections.
Europeans will cast their ballots for a new bloc-wide Parliament in June, while Americans will head to the polls to elect a new president in November.
Faced with growing pressure, some companies, including ChatGPT maker OpenAI, said they would start labeling AI-generated images. Meta said Tuesday it would start labeling AI-generated images on Instagram, Facebook and Threads in “the coming months.”
Many in Europe hope the DSA will ensure major social media companies are held more accountable for how they protect elections from disinformation.
Very large online platforms must take measures to limit broadly defined systemic risks, including potential negative effects on electoral processes. They also have to ensure their platforms aren’t being exploited by coordinated manipulation campaigns, such as bot factories amplifying disinformation.
Breton also said that the Commission would issue guidelines by March for very large online platforms on the measures they should take to counter electoral disinformation. “We will specifically describe what the platform will have to do to make sure that they are in conformity with the DSA when it comes to [the] integrity of the elections,” he said.
He added that platforms would have to set up a “rapid reaction mechanism for any kind of incident,” and that a simulation exercise would be conducted to ensure such systems were working.