Artificial intelligence (AI) is adding to the threat of election disinformation worldwide.
The technology makes it easy for anyone with a smartphone and an imagination to create fake – but convincing – content aimed at fooling voters.
Just a few years ago, creating fake photos, videos or audio required teams of people with time, skill and money. Now, free and low-cost generative artificial intelligence services from companies like Google and OpenAI permit people to create high-quality “deepfakes” by entering a simple text prompt.
Expanding threats
A wave of AI deepfakes tied to elections in Europe and Asia has been appearing on social media for months. It serves as a warning for the more than 50 countries holding elections this year.
Some recent examples of AI deepfakes include:
— A video of Moldova’s pro-Western president throwing her support behind a political party friendly to Russia.
— Audio of Slovakia’s liberal party leader discussing changing ballots and raising the price of beer.
— A video of an opposition lawmaker in Bangladesh — a conservative Muslim majority nation — wearing a bikini.
The question is no longer whether AI deepfakes could affect elections, but how influential they will be, said Henry Ajder, who runs a business advisory company called Latent Space Advisory in Britain.
“You don’t need to look far to see some people ... being clearly confused as to whether something is real or not,” Ajder said.
Challenge to democracy
As the U.S. presidential race nears, Christopher Wray, the director of the Federal Bureau of Investigation, issued a warning about the growing threat of generative AI. He said the technology makes it easy for foreign groups to attempt to interfere in elections.
With AI deepfakes, a candidate’s image can be made much worse or much better. Voters can be moved toward or away from candidates — or even to avoid the polls altogether. But perhaps the greatest threat to democracy, experts say, is that the growth of AI deepfakes could hurt the public’s trust in what they see and hear.
The complexity of the technology makes it hard to find out who is behind AI deepfakes. Experts say governments and companies are not yet capable of stopping the problem.
The world’s biggest tech companies recently — and voluntarily — signed an agreement to prevent AI tools from disrupting elections. For example, the company that owns Instagram and Facebook has said it will start labeling deepfakes that appear on its services.
But deepfakes are harder to limit on apps like Telegram, which did not sign the voluntary agreement. Telegram uses encrypted messages that can be difficult to monitor.
Concerns about efforts to limit AI
Some experts worry that efforts to limit AI deepfakes could have unintended results.
Tim Harper is an expert at the Center for Democracy and Technology in Washington, DC. He said sometimes well-meaning governments or companies might crush the “very thin” line between political commentary and an “illegitimate attempt to smear a candidate.”
Major generative AI services have rules to limit political disinformation. But experts say it is too easy to defeat the restrictions or use other services.
AI software is not the only threat.
Candidates themselves could try to fool voters by falsely claiming that real events showing them in bad situations were manufactured by AI.
Lisa Reppell is a researcher at the International Foundation for Electoral Systems in Arlington, Virginia.
She said, “A world in which everything is suspect — and so everyone gets to choose what they believe — is also a world that’s really challenging for…democracy.”
I’m John Russell.