New AI video tools heighten fears of deepfakes ahead of elections
Experts worry malicious actors could use AI tools to create deepfakes, confusing and misleading voters in an election year.
The video that OpenAI released to unveil its new text-to-video tool, Sora, has to be seen to be believed. Photorealistic scenes of charging woolly mammoths in clouds of snow, a couple walking through falling cherry blossoms and aerial footage of the California gold rush.
The demonstration reportedly prompted movie producer Tyler Perry to pause an $800m studio investment. Tools like Sora promise to translate a user’s vision into realistic moving images with a simple text prompt, the logic goes, making studios obsolete.
Others worry that artificial intelligence (AI) like this could be exploited by those with darker imaginations. Malicious actors could use these services to create highly realistic deepfakes, confusing or misleading voters during an election or simply causing chaos by seeding divisive rumours.
Regulators, law enforcement and social media platforms are already struggling to cope with the rise of AI-generated disinformation, including faked audio of political leaders that has allegedly helped to skew an election in Slovakia and discourage people from voting in the New Hampshire primaries.
Politicians and civil society worry that as these tools become more and more sophisticated, it will be harder than ever for everyday people to tell what is real and what is fake.
But experts in political disinformation and AI say that the increasing use of AI products is only a new facet of an old problem. These tools just add to an already well-stocked arsenal of technologies and techniques used to manipulate and mislead.
Dealing with the challenge of deepfakes really means addressing the unresolved questions of how to regulate the social media platforms on which they will spread and making Big Tech companies responsible when their products are left open to misuse.
“These AI image generators threaten to make the problem of election disinformation worse but we should be very conscious that it’s already a problem,” said Callum Hood, the head of research at the Center for Countering Digital Hate (CCDH), a campaign group. “We already need tougher measures from social media platforms on that existing problem.”
Several companies that offer generative AI image makers, including Midjourney, OpenAI and Microsoft, have policies that are supposed to prevent users from generating misleading pictures. However, CCDH claims these policies are not being enforced.
In a study released on March 6, the centre showed that it was still relatively straightforward to generate images that could well be dangerous in the highly partisan context of the United States elections, including faked photorealistic images of President Joe Biden in hospital or greeting migrants on the Mexican border and CCTV-style images of election tampering.
These images reflect common falsehoods in US politics. Former President Donald Trump has routinely promoted the idea that the results of the 2020 election were manipulated, a lie that helped give rise to violent protests at the Capitol building in January 2021.
“It shows [the companies] haven’t thought this through enough,” Hood said. “The big vulnerability here is in images that can be used to support a narrative of [a] stolen election, or false claims of election fraud.”
The researchers found significant differences in how individual image makers responded to the prompts – some would not allow users to create images that were very obviously partisan. “Those differences show that it is possible to put effective safeguards in place,” Hood said, adding that this reflects a choice on the part of the companies.
“It’s symptomatic of a broader imbalance between the profit motive and safety of AI companies,” he said. “They have every incentive to move as fast as possible with as few guardrails in place so they can push products out, push new features out and grab a bit more in venture funding or investment. They have no incentive to slow down and be safe first.”
OpenAI, Microsoft and Midjourney did not respond to requests for comment.
Little achieved
That incentive is likely to come only in the form of regulation that forces tech companies to act and penalises them if they do not. But social media disinformation experts say they feel a sense of déjà vu. The conversations taking place around the regulation of AI sound eerily like those that were had years ago around the spread of disinformation on social media. Big Tech companies pledged to put in place measures to tackle the spread of dangerous falsehoods, but the problem persists.
“It’s like Groundhog Day,” said William Dance, a senior research associate at Lancaster University, who has advised United Kingdom government departments and security services on disinformation. “And it tells you how little, really, we’ve achieved in the last 10-15 years.”
With potentially highly charged elections taking place in the European Union, the UK, India and the US this year, Big Tech companies have once again pledged, individually and collectively, to reduce the spread of this kind of disinformation and misinformation on their platforms.
In late February, Facebook and Instagram owner Meta announced a series of measures aimed at reducing disinformation and limiting the reach of targeted influence operations during the European Parliament elections. These include allowing fact-checking partners – independent organisations Meta allows to label content on its behalf – to label AI-generated or manipulated content.
Meta was among approximately 20 companies that signed up to a “Tech Accord”, which promises to develop tools to spot, label and potentially debunk AI-generated misinformation.
“It sounds like there’s a kind of blank template which is like: ‘We will do our utmost to protect against blank’,” Dance said. “Disinformation, hate speech, AI, whatever.”
The confluence of unregulated AI and unregulated social media worries many in civil society, particularly since several of the largest social media platforms have cut back their “trust and safety” teams responsible for overseeing their response to disinformation and misinformation, hate speech and other harmful content. X – formerly Twitter – shed nearly a third of its trust and safety staff after Elon Musk took over the platform in 2022.
“We are in a doubly concerning situation, where spreaders of election disinformation have new tools and capabilities available to them, while social media companies, in many cases, seem to be deliberately limiting themselves in terms of the capabilities they have for acting on election disinformation,” CCDH’s Hood said. “So it’s a really significant and worrying trend.”
Before election cycles begin in earnest, it is hard to predict the extent to which deepfakes will proliferate on social media. But some of the damage has already been done, researchers say. As people become more aware that sophisticated fake footage and images can be created, a broader sense of distrust and unease sets in. Real images or videos can be dismissed as fake – a phenomenon known as the “liar’s dividend”.
“I go online, I see something, I’m like, ‘Is this real? Is this AI generated?’ I cannot tell any more,” Kaicheng Yang, a researcher at Northeastern University who studies AI and disinformation, said. “For average users, I am just going to assume it’s going to be worse. It doesn’t matter how much genuine content is online. As long as people believe there is a lot, then we have a problem.”