Adobe takes on deceptive image manipulation with AI tools
Image credit: Dreamstime
Adobe – the company whose Photoshop software is synonymous with image manipulation – has developed a neural network capable of identifying regions of images that have been altered.
“People edit images to achieve new heights of artistic expression, to preserve our history and even to find missing children,” Adobe said in a statement. “On the flipside, some people use these powerful tools to ‘doctor’ photos for deceptive purposes.”
In particular, there have been concerns about how easily and irresponsibly film and photographs can be edited using machine-learning tools that allow anybody to create highly convincing fake images, or 'deepfakes'. For instance, automated lip syncing could make an honest politician appear to be caught on film saying something inappropriate, untrue or offensive, while the proliferation of pornographic hoaxes (created by superimposing one person’s face onto another's body) has led MPs to discuss specifically criminalising the creation of explicit manipulated images.
Now, Adobe has proposed a set of machine-learning tools that could eventually help tackle the rise of realistic fakes by identifying the altered regions of an image.
Adobe decided to train its neural network to distinguish between three common approaches to manipulation: splicing (combining parts of different images), copy-move (moving or duplicating items within an image) and removal (erasing an item from an image entirely).
“Each of these techniques tend to leave certain artefacts, such as strong contrast edges, deliberately smoothed areas or different noise patterns,” said Dr Vlad Morariu, senior research scientist at Adobe.
The neural network was trained on tens of thousands of manipulated images. It learnt to identify patterns associated with 'tampering artefacts', such as strong contrast differences or unnatural boundaries, and to compare noise inconsistencies between real and tampered regions of an image. Many cameras leave characteristic noise patterns in their photographs, which makes it possible to detect when multiple images have been combined.
For instance, if an editor attached the upper body of a woman to a picture of a swimming fish to create a mermaid, there would likely be clues left at the boundaries around the human upper body, some too subtle for the naked eye, as well as incongruous noise patterns from the two source images.
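The noise-inconsistency idea can be illustrated with a toy sketch: estimate the residual noise in each block of an image and flag blocks whose noise level differs sharply from the rest. This is purely illustrative and not Adobe's method; the function names, block size and threshold below are assumptions for the example.

```python
import numpy as np

def local_noise_map(image, block=16):
    """Estimate per-block noise as the std of the residual left after a
    3x3 mean smoothing -- a crude stand-in for learnt noise features
    (illustrative only, not Adobe's approach)."""
    padded = np.pad(image, 1, mode="edge")
    # 3x3 mean filter built from nine shifted views of the padded image
    smooth = sum(padded[i:i + image.shape[0], j:j + image.shape[1]]
                 for i in range(3) for j in range(3)) / 9.0
    residual = image - smooth
    rows, cols = image.shape[0] // block, image.shape[1] // block
    noise = np.empty((rows, cols))
    for r in range(rows):
        for c in range(cols):
            noise[r, c] = residual[r * block:(r + 1) * block,
                                   c * block:(c + 1) * block].std()
    return noise

def flag_inconsistent(noise_map, z=3.0):
    """Flag blocks whose noise deviates strongly from the image-wide
    median -- a possible sign the region came from another source."""
    med = np.median(noise_map)
    mad = np.median(np.abs(noise_map - med)) + 1e-9  # robust spread
    return np.abs(noise_map - med) / mad > z
```

In the mermaid example, the spliced-in upper body would occupy blocks whose residual statistics clash with those of the underwater photograph, so they would light up in the flag map while untouched blocks would not.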
Combining both approaches, the neural network performed well, identifying the specific regions of pictures where manipulation had taken place. The researchers will continue to refine it, training it to spot JPEG compression artefacts and other signs of tampering.
This application is not yet available to customers, and the neural network in its current form could not be deployed to identify deepfake images. However, the research is a step towards automating a key aspect of digital forensics, taken by the very company behind image manipulation.