Deepfake video featuring Jordan Peele and Barack Obama

Sex, coups, and the liar’s dividend: what are deepfakes doing to us?

Image credit: Getty

Deepfakes are racing towards hyper-realism, forcing us to face a future in which the fake imitates reality and reality is dismissed as fake. Their damage is already resonating beyond the bullying of public figures.

Since Sir Arthur Conan Doyle was diddled by the Cottingley Fairies and Stalin scrubbed his frenemies from photographs, we should have known that media can be manipulated. For the past century, photos, video, and audio have been sliced up and mashed together for the sake of art, satire, and deception.

The difference is that now the creation of synthetic media – once so painstaking – can be automated. Within the past two years the internet has become flooded with deepfake videos: videos manipulated by deep neural networks to replace a person with the likeness of another.

Deepfakes have their origins in autoencoders: a type of artificial neural network (ANN) taught to represent (encode) an input as a set of meaningful features from which it can also reconstruct (decode) something similar to the input. For photographs and videos of faces, for instance, an autoencoder creates an abstract representation of the face called a ‘latent face’ from which it can reconstruct the original.

Faces can be swapped by training two such networks with a shared encoder, so that both people's faces are encoded using a common set of features. Passing a latent face encoded from footage of person A to the decoder trained on person B forces that decoder to render person B's face with person A's expression and pose, swapping B's likeness into A's video.
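To make the mechanics of the last two paragraphs concrete, here is a minimal sketch in Python (using PyTorch) of a shared encoder with one decoder per identity. The layer sizes, the 64×64 input resolution and all of the names are illustrative assumptions, not the architecture of any real deepfake tool.

```python
import torch
import torch.nn as nn

# Toy autoencoder layout: one shared encoder, one decoder per identity.
# Sizes and layers are illustrative only, not a real deepfake architecture.

class Encoder(nn.Module):
    """Compresses a 64x64 RGB face crop into a 256-dimensional 'latent face'."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 64 * 3, 1024), nn.ReLU(),
            nn.Linear(1024, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs a face image from a latent face, for one specific person."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 1024), nn.ReLU(),
            nn.Linear(1024, 64 * 64 * 3), nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(z).view(-1, 3, 64, 64)

shared_encoder = Encoder()
decoder_a = Decoder()   # trained only on person A's face crops
decoder_b = Decoder()   # trained only on person B's face crops

# Training (not shown) minimises reconstruction loss for each pair:
#   decoder_a(shared_encoder(frame_of_A)) should approximate frame_of_A
#   decoder_b(shared_encoder(frame_of_B)) should approximate frame_of_B

# The swap: encode a frame of person A, decode it with person B's decoder.
frame_of_a = torch.rand(1, 3, 64, 64)       # stand-in for a video frame of A
latent_face = shared_encoder(frame_of_a)    # captures A's pose and expression
swapped = decoder_b(latent_face)            # rendered with B's face
```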

Autoencoder-only deepfakes were soon superseded by approaches built on generative adversarial networks (GANs). GANs improve the decoding process through an adversarial relationship with a discriminator: a decoder (the generator) creates new images from latent faces, while the discriminator guesses whether each image shows a real or a synthetic person, allowing the generation of progressively more convincing creations.
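The adversarial loop itself is compact enough to sketch. The toy PyTorch example below uses tiny fully connected networks and random tensors as stand-ins for real face data; it is meant only to show the generator-versus-discriminator dynamic described above, not a working face generator.

```python
import torch
import torch.nn as nn

# Minimal GAN training loop on flattened 64x64x3 images; all sizes are placeholders.
IMG = 64 * 64 * 3
generator = nn.Sequential(nn.Linear(256, 1024), nn.ReLU(), nn.Linear(1024, IMG), nn.Tanh())
discriminator = nn.Sequential(nn.Linear(IMG, 1024), nn.LeakyReLU(0.2), nn.Linear(1024, 1))

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.rand(32, IMG)        # stand-in for a batch of real face crops
    noise = torch.randn(32, 256)
    fake = generator(noise)

    # Discriminator: learn to score real images high and generated ones low.
    d_loss = (loss_fn(discriminator(real), torch.ones(32, 1)) +
              loss_fn(discriminator(fake.detach()), torch.zeros(32, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator: learn to produce images the discriminator scores as real.
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```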

This adversarial relationship at the heart of GANs is a microcosm of the battle brewing over deepfakes online, in which detectors train algorithms to find synthetic videos and flag them up for removal while creators rework their neural networks to improve realism and evade detection. In the early days of deepfakes, for instance, unnaturally infrequent blinking was a common giveaway that a video was synthetic, but creators quickly caught wind that detection algorithms were exploiting this and added more realistic blinks. This back-and-forth forces detectors to battle constantly for dominance while creators inch closer to realism.
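The early blink-based giveaway translates into a very simple detection heuristic: count how often the eyes close over a clip and flag footage where the blink rate is implausibly low. The sketch below assumes per-frame eye-openness scores (an 'eye aspect ratio') have already been extracted by some facial-landmark detector; the threshold values are illustrative rather than taken from any published detector.

```python
def count_blinks(eye_aspect_ratios, closed_threshold=0.2, min_closed_frames=2):
    """Count blinks in a sequence of per-frame eye-aspect-ratio values.

    A blink is a run of at least `min_closed_frames` consecutive frames in
    which the eye aspect ratio drops below `closed_threshold`.
    """
    blinks, run = 0, 0
    for ear in eye_aspect_ratios:
        if ear < closed_threshold:
            run += 1
        else:
            if run >= min_closed_frames:
                blinks += 1
            run = 0
    if run >= min_closed_frames:
        blinks += 1
    return blinks

def looks_suspicious(eye_aspect_ratios, fps=25, min_blinks_per_minute=4):
    """Flag a clip whose subject blinks far less often than people normally do."""
    minutes = len(eye_aspect_ratios) / (fps * 60)
    if minutes == 0:
        return False
    rate = count_blinks(eye_aspect_ratios) / minutes
    return rate < min_blinks_per_minute
```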

The arrival of deceptively manipulated video in the mainstream – such as when a crudely edited video (a 'shallowfake') of US House Speaker Nancy Pelosi appearing drunk went viral – has been followed by a consensus that deepfakes need stamping out of civilised online spaces via detection and removal, just like child porn and Nazi propaganda. Social media platforms have taken some action, with Reddit banning its horny r/deepfakes subreddit and Facebook announcing it will try to detect and remove malicious deepfakes. Despite these encouraging noises, however, there is no end in sight to the arms race.

“There is definitely more in the way of resources and research going into the generation side of things, and at the moment we definitely see a lack of balance,” said Henry Ajder, head of threat intelligence at Deeptrace, the first company to bring a deepfake detector to market. “One thing we find quite frustrating at times and we hope will change is that companies who are developing new forms of AI-generated synthetic media aren’t necessarily providing people building tools to detect that media with privileged access to data that we could use to train our models. I think for the time being that adversarial dynamic is here to stay.”

Some hope the arms race could be ended by approaching the problem from the other direction: authenticating legitimate videos. The main player in this space is Amber, which has created a blockchain-based video authentication system. The Amber system generates cryptographic hashes from a video's encoded data, which are stored on the Ethereum blockchain with associated timestamps. Comparing these hashes with those generated from another version of the video (such as a short clip taken from hours-long police bodycam footage) confirms whether the clip is identical to the original or has been manipulated.

“We are finding sequences of the data that are invariant to any kind of trimming, which is the most common type of editing,” said Rod Hodgson, head of engineering at Amber. “You’d be able to correlate that with the original and if they match over a specific period of time, we can say this is an accurate representation of the footage.”
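One way to picture the approach Hodgson describes is to hash the footage in fixed units (here, per decoded frame) and check whether a clip's hash sequence appears contiguously in the original's. The sketch below is a simplified illustration of that idea, not Amber's actual algorithm; in a real system the digests would also be timestamped and anchored on the Ethereum blockchain rather than held in a Python list.

```python
import hashlib

def frame_hashes(frames):
    """Return a SHA-256 digest for each decoded frame (given here as raw bytes)."""
    return [hashlib.sha256(frame).hexdigest() for frame in frames]

def is_faithful_clip(clip_frames, original_hashes):
    """True if the clip's frames appear, unaltered and in order, in the original."""
    clip_hashes = frame_hashes(clip_frames)
    n = len(clip_hashes)
    return any(original_hashes[i:i + n] == clip_hashes
               for i in range(len(original_hashes) - n + 1))

# Toy example: the 'video' is a list of frame byte strings.
original = [b"frame-%d" % i for i in range(100)]
anchored_hashes = frame_hashes(original)         # stored with a timestamp at capture time

trimmed_clip = original[40:55]                   # honest trim: still matches
doctored_clip = original[40:50] + [b"tampered"]  # altered frame: no longer matches

print(is_faithful_clip(trimmed_clip, anchored_hashes))   # True
print(is_faithful_clip(doctored_clip, anchored_hashes))  # False
```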

Although authentication would require a whole new infrastructure, with everyone on board from camera manufacturers to internet platforms, Amber has been engaged in positive discussions with various stakeholders. Hodgson and his colleagues envision video authentication being as easy to use as HTTPS is in a browser: for instance, with a green tick appearing on authenticated footage. Amber also hopes that authenticating at source will establish trust; people won’t need to place faith in law enforcement to handle their body cam video honestly or in Amazon engineers to secure AWS servers storing their security camera footage.

While authentication protects against false negatives, there remains the possibility – even with allowances such as permitting facial blurring – that it will produce false positives for honestly edited footage. In the immediate future, neither detection nor authentication offers a watertight way to flag up deepfakes as they multiply online.

Research on GANs has exploded since their introduction in 2014, with much of the work published as open-source code. This has democratised media manipulation, leaving non-experts able to create basic deepfakes almost as easily as applying an Instagram filter and more than doubling the number of deepfakes in circulation during 2019 alone (the vast majority of which place Hollywood actresses into porn).

Thankfully, for now most amateur deepfakes are reassuringly rubbish.

“To make the perfect deepfake like the ones you see go viral on YouTube takes hours and hours of work […] running lots of different iterations, and most of them have post-production work done to make them look more realistic,” said Ajder. “[For] 95 per cent of the deepfakes I see on a daily basis it’s very easy to see they’re deepfakes.”

All sorts of synthetic media can be weaponised by satirists, artists, and bullies, of course. Ajder explains that Indian politicians are almost by tradition smeared with manually edited videos and photographs depicting them in gay porn; evidence suggests that these trolls are upping their game with deepfakes. In Iraq, deepfake porn of a female political candidate was circulated by a man apoplectic at the idea of women in politics, and in Brazil a senator was attacked with a deepfake video depicting him in an orgy.

“Deepfakes overwhelmingly are a gendered form of digital violence against women, but that’s not to say that men cannot similarly be harmed,” Ajder said. “It doesn’t have to be perfectly realistic to do a lot of damage, and that includes politicians and private individuals being [inserted into porn].”

Ajder warns that it cannot be taken for granted that we will always be able to recognise synthetic media for what it is. The threshold for people being fooled by deepfakes could be “worryingly low”, particularly if people with low media literacy are microtargeted with this material on social media. Much AI-synthesised media is already difficult for humans to spot: human faces generated by Nvidia’s StyleGAN tool and ANN-generated voices created by start-up Dessa are almost impossible to identify as fake. Experts told E&T of their concern that such synthetic media, stripped of context and shared on closed messaging platforms, could under some circumstances be extremely difficult to debunk, even for professional fact-checkers. Media with small but crucial alterations, or media taken out of context, could have similarly grave impacts.

“What worries me more than fully faked videos is small alterations to a video, like changing the lapel on a military uniform to make it from one country to another,” said Sam Dubberley, a University of Essex researcher and adviser to Amnesty International’s Evidence Lab. “It’s easy enough to [debunk] if it’s Trump or Putin, but if it’s a Rohingya village chief and there’s WhatsApp audio saying ‘All the Rohingya must rise up and burn down the police state’, that’s going to be impossible to prove where it came from.”

Deepfakes have arrived at a moment in which – for many people – the truth can be whatever they want it to be. The mere existence of deepfakes aggravates this atmosphere of tribalism and distrust in which any evidence can be dismissed as fake: a problem academics have described as the “liar’s dividend”.

“The worry I have is that deepfakes are a way of creating chaos in the current disinformation climate […] but also they’ll create some sort of plausible deniability and that’s what I see as being the major aim,” said Professor Lilian Edwards, an internet law and policy expert based at Newcastle University. “It’s a chaotic aim.”

Edwards cites Trump’s claim that his pussy-grabbing audio tape was faked as an example of politicians weaponising the plausible deniability legitimised by synthetic media. Those who believe Trump’s denial were probably already prepared to accept as fact whatever confirms their tribal beliefs, and it is easy to dismiss these people as a lost cause who did not need more excuses to choose their own facts. However, there is evidence that the liar’s dividend specifically associated with deepfakes (and the misconception that creating realistic deepfakes is trivial) is already a potent threat in parts of the world.

In Gabon in late 2018, a video of President Ali Bongo (who had temporarily stopped making public appearances due to ill health) was shared, and its unusual appearance led an opposition politician to joke, and then seriously argue, that the video was a deepfake. Days later, members of Gabon’s military attempted a coup, citing the video’s appearance as evidence that affairs were not as they should be. In Malaysia last year, a gay sex tape allegedly featuring the Minister of Economic Affairs and a rival minister’s aide circulated online. While the aide swore that the video was real, the minister and Prime Minister dismissed it as a deepfake, allowing the minister to escape the serious legal consequences he might otherwise have expected in the socially conservative country.

Subsequent analysis suggested that neither of the two videos was a deepfake, but damage was done regardless: “Awareness of deepfakes alone is destabilising political processes by undermining the perceived objectivity of videos featuring politicians and public figures,” Deeptrace wrote in its 2019 report. Western democracies like the UK, many of which have seen their institutions weakened in the past few years, cannot take for granted that they are invulnerable to destabilisation.

The erosion of trust in video caused by deepfakes is likely to resonate beyond the sphere of politics, as the impact of fake news is felt across healthcare, science, and other areas. Video evidence fuels social movements like Black Lives Matter and the Hong Kong pro-democracy protests, and is increasingly significant in courts of law, with the European Human Rights Advocacy Centre submitting video in its litigation regarding the 2014 annexation of Crimea.

According to Shamir Allibhai, CEO of Amber, the company was motivated by concerns about what plausible deniability regarding video evidence could mean for social justice movements. He points out that as the IoT grows, more video will be captured from drones, police body cams, autonomous vehicles and millions of other connected devices; if it can be trusted, this video will be an ever-more crucial tool in battles for justice from courtrooms to campaigns.

“Video evidence in cases [like the beating of Rodney King, a black man, by Los Angeles police officers in 1991] was really important for creating positive change. In a world of deepfakes where this video looks indistinguishable from any other video, just the existence of deepfakes will allow people to write off videos that don’t confirm their worldview,” Allibhai said. “If in 10 years there’s a similar situation like Black Lives Matter, could it even get started in a world where you can dismiss bystander video evidence as potentially fake?”

Video evidence is also used by human rights advocates to heighten public awareness of atrocities; Dubberley explains that Amnesty likes to use it alongside other forms of evidence because it conveys the human fear and suffering that can be lost in a report: “You can write a 95-page report, you can speak to 100 people and you can get their eyewitness testimony, but actually it’s when you’ve got photo and video evidence that you’ve got people sitting up and noticing,” he said.

While Amnesty and other organisations never rely on video evidence alone, Dubberley says there is a concern that as deepfakes proliferate, they could become increasingly cautious about using it as evidence: “I guess with deepfakes the challenge is do we pull away from commenting on something we otherwise might have commented [on]”. While Dubberley does not recall any cases in which deepfakes have been used to sow deliberate confusion about atrocities, this happens “time and time again” with shallowfakes and media presented in a misleading context, suggesting that it may only be a matter of time until deepfakes are misused in this way too.

There is no straightforward way to fix the tangle of problems caused by deepfakes and other forms of synthetic media. In its 2019 snapshot paper on deepfakes, the UK Centre for Data Ethics and Innovation calls for a combination of legislation, investment in screening technology, media literacy efforts, and bolstering institutions like a free and open press.

Experts seem to agree that a multi-pronged approach is necessary: “The solution should not be one solution but a mix of different policies from different actors involved,” said Dr Elena Abrusci, an expert on technology and human rights at the University of Essex. “There should be more transparency and efforts from the tech companies. We believe [media literacy] is important but we also don’t think it’s the solution to all problems, we don’t want to put the burden on the individual so it’s just their responsibility to detect deepfakes.”

Edwards says that deepfakes absolutely cannot be fixed with a single technological or legal silver bullet; they are part of “a problem that is pervading our entire political ecosystem right now”. This disinformation crisis has already led to a patchwork of laws – as well as pledges from platforms, soul searching from experts, and demands from campaigners – some of which could be used to fight deepfakes alongside existing copyright and defamation laws. However, nobody believes that the issues with deepfakes will be straightforward to resolve, particularly when debates around freedom of expression and responsibility for enforcement remain contentious and unresolved.

Some movement has been made against deepfakes in the US, with actors such as Facebook and the state of California providing some backing for deepfake detection, and the Deepfake Report Act set to provide data to help researchers, but across most of the world there is minimal commitment to taking on malicious deepfakes. The danger is that the fightback won’t start in earnest until the damage has been done.

