Remember Back to the Future*? The car, the music, and at the center of it all, the iconic duo of Robert Downey Jr. and Tom Holland as Doc Brown and Marty McFly. Of course, you don’t remember that last bit – in fact, Tom Holland wasn’t even born when Back to the Future came out in 1985.
But you can see this alternative casting in just one of hundreds of novelty “deepfake” videos that have popped up over the internet over the last couple of years. Using deepfaking technology, these videos can transpose actors into films in which they never starred, or celebrities into interviews that never took place, with uncanny accuracy. There’s a growing worry over the implications of deepfakes. How far is this fear justified? And what can we do to address it?
What are deepfakes – and should we be worried?
Deepfakes operate by building on a technology called “generative adversarial networks” (GANs). An adversarial network consists of two neural networks – one called the Generator, and the other called the Discriminator.
First, the Generator takes an input – typically random noise, though some systems seed it with an existing photo – and tries to produce a picture that looks like the real photos it is being trained to imitate. This picture is then fed to the Discriminator network, which attempts to judge whether the input it has been given is genuine or fake. This is where the “adversarial” name comes from: the Generator is trying to fool the Discriminator, while the Discriminator is trying not to be fooled.
The more the two networks train, the better they become at generating fake images that look like the real thing. Eventually, an image is created that is indistinguishable (at least to the Discriminator) from actual photographs of that person. A deepfake is born. That’s how you can get a video of Facebook CEO Mark Zuckerberg selling fraudulent dialysis treatments, or comically threatening to take over the world.
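The training loop described above can be sketched in a few lines of Python. This is a deliberately tiny, illustrative example, not a real deepfake system: the “photos” are just numbers drawn from a bell curve, and each “network” is reduced to a two-parameter function (real systems use deep convolutional networks with millions of parameters). The point is the adversarial structure – one update pushes the Discriminator to separate real from fake, the next pushes the Generator to close the gap.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

# Generator: g(z) = a*z + b -- starts out producing samples centered at 0.
a, b = 1.0, 0.0
# Discriminator: d(x) = sigmoid(w*x + c) -- outputs "probability x is real".
w, c = 0.1, 0.0

REAL_MEAN, LR, BATCH = 4.0, 0.03, 64   # "real photos" are drawn near 4.0

for step in range(5000):
    # --- Discriminator update: push d(real) toward 1 and d(fake) toward 0 ---
    real = rng.normal(REAL_MEAN, 1.0, BATCH)
    z = rng.normal(0.0, 1.0, BATCH)
    fake = a * z + b
    d_real = sigmoid(w * real + c)
    d_fake = sigmoid(w * fake + c)
    # Gradient ascent on log d(real) + log(1 - d(fake))
    w += LR * np.mean((1 - d_real) * real - d_fake * fake)
    c += LR * np.mean((1 - d_real) - d_fake)

    # --- Generator update: push d(fake) toward 1, i.e. fool the Discriminator ---
    z = rng.normal(0.0, 1.0, BATCH)
    fake = a * z + b
    d_fake = sigmoid(w * fake + c)
    upstream = (1 - d_fake) * w        # gradient of log d(x) at the fake samples
    a += LR * np.mean(upstream * z)
    b += LR * np.mean(upstream)

# After training, the Generator's output distribution has drifted toward the
# real data it was never shown directly -- it only ever saw the Discriminator.
samples = a * rng.normal(0.0, 1.0, 1000) + b
print(f"generated mean ~ {samples.mean():.2f} (real mean is {REAL_MEAN})")
```

Run it and the generated mean lands close to the real one: the Generator has learned to imitate data it never saw, guided only by the Discriminator’s verdicts.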
But as far as some critics are concerned, such videos are laughter in the dark. The ability to create almost completely convincing, lifelike representations, they say, could have chilling implications in many areas of life.
Deepfake videos could put incriminating words into the mouths of politicians – or create opportunity for politicians to dismiss their actual words or actions on the grounds they were deepfaked. There’s also significant potential for fraud. If people can be convinced by a phishing email from someone pretending to be their bank, asking for a money transfer, then how much more compelling would it be to get a video call from a colleague or relative? The EU police agency Europol recommended in a report that law enforcement authorities make ‘significant investments’ in technologies that can help detect the malicious use of deepfakes.
The risk also applies to the sinister underworld of internet pornography, where celebrities and others can be digitally inserted into explicit material without their consent. According to a September 2019 report from DeepTrace, 96% of deepfake videos online at the time were pornographic.
There are those, however, who point out that the threat may be less than it seems. Photo manipulation has been around since at least the 1860s – and the removal of purged political enemies from official photographs was used extensively in the Soviet Union under Joseph Stalin. As recently as 2004, a doctored photograph of John Kerry appeared in the New York Times* during that year’s presidential race. And it has been possible to lie in print for as long as there has been, well, print. So surely, argue the skeptics, this is just another technology we’ll have to learn to live with.
Spotting the fakes
Despite these assurances, it is understandable that many people would feel safer if there were foolproof, technology-driven ways to tell a deepfake apart from the real thing. And indeed, research continues on this front. What AI can make, it can help unmake.
For instance, AI company Zeff is just one of the firms looking to keep detection systems one step ahead of the game. Initially, one of the tell-tale signs of a deepfake was unnatural blinking patterns. However, once those patterns were identified, fixes were incorporated into deepfake tools – they now blink like real people.
In response, Zeff moved on to using AI to look at blood flow under the skin – which deepfakes still can’t properly emulate. However, speaking on the Intel on AI podcast, Zeff Chief Data Officer Ben Taylor is pragmatic about the long-term sustainability of the solution, acknowledging the cat-and-mouse nature of the field. “This is a moving goal post problem. So whatever good idea we come up with, that's an idea someone else can use to improve deepfakes”.
Another solution is to add digital layers to photographs that make them unusable for deepfake generation, starving deepfakes of their fuel. For example, independent researchers have developed an algorithmic layer that can be coded into an image. The code then distorts that image unrecognizably if it detects someone is trying to use it for a deepfake.
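One way such a protective layer can work – and this is a simplified sketch of the general idea, not the specific researchers’ algorithm – is to add a perturbation to the photo that is too small for a human to notice but large enough to scramble what a machine-learning model extracts from it. The stand-in “feature extractor” below is just a fixed random matrix standing in for a real face-analysis network; the loop nudges each pixel within a tiny budget to push the extracted features as far as possible from the original.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in "feature extractor": a fixed random linear map. A real system
# would use a deep face-recognition network here; this is purely illustrative.
W = rng.normal(size=(8, 64))
def embed(img):
    return W @ img

image = rng.uniform(0.0, 1.0, 64)   # a flattened 8x8 "photo", pixels in [0, 1]
original_features = embed(image)

EPS = 0.05   # per-pixel perturbation budget: keep the change invisible to people

# Start from a tiny random nudge (the gradient is zero at the exact original),
# then repeatedly step in the direction that moves the features away from the
# original, clipping so no pixel ever changes by more than EPS.
x = image + rng.normal(0.0, 1e-3, 64)
for _ in range(50):
    grad = 2 * W.T @ (embed(x) - original_features)   # gradient of feature distance
    x = x + 0.01 * np.sign(grad)                      # signed ascent step
    x = np.clip(x, image - EPS, image + EPS)          # respect the pixel budget
    x = np.clip(x, 0.0, 1.0)                          # stay a valid image

shift = float(np.linalg.norm(embed(x) - original_features))
print(f"pixel change <= {EPS}, feature vector moved by {shift:.2f}")
```

The photo still looks the same to a person, but the features a model reads out of it have moved substantially – which is exactly the property that makes the image poor fuel for training a deepfake of its subject.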
It’s not yet clear whether the risk posed by deepfakes will grow. We may learn to live with them, or technology may make detection easier. But perhaps the safest position to take is that of Facebook* Chief Technology Officer Mike Schroepfer: “I want to be really prepared for a lot of bad stuff that never happens, rather than the other way around.”