DeepMind develops watermark to identify AI images
Google's DeepMind is trialling a digital watermark that would allow computers to spot images made by artificial intelligence (AI), as a means to fight disinformation.
The tool, named SynthID, embeds changes into individual pixels in images, creating a watermark that computers can identify but that remains invisible to the human eye.
Nonetheless, DeepMind has warned that the tool is not “foolproof against extreme image manipulation”.
The beta version of SynthID is currently available for select users of Vertex AI (Google’s platform for building AI apps and models) and can only be applied to Imagen, Google’s AI image generator.
“While generative AI can unlock huge creative potential, it also presents new risks, like enabling creators to spread false information – both intentionally or unintentionally,” DeepMind writes in a blog post.
“Being able to identify AI-generated content is critical to empowering people with knowledge of when they’re interacting with generated media, and for helping prevent the spread of misinformation.”
The company has explained that the tool relies on two separate algorithms: one to create the watermarks and another to identify AI-generated images. DeepMind said the watermark remains detectable even after the image has been edited or modified.
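DeepMind has not published SynthID's algorithm, but the general idea of an invisible, machine-readable watermark can be illustrated with a deliberately simple sketch. The example below uses least-significant-bit (LSB) embedding on a toy list of grayscale pixel values; the signature, function names and pixel data are all hypothetical, and real systems such as SynthID use far more robust techniques that survive editing.

```python
# Toy illustration of an invisible pixel-level watermark (NOT SynthID's
# actual, unpublished method). Each embedded bit changes a pixel's value
# by at most 1 out of 255 - invisible to the eye, readable by software.

WATERMARK_BITS = [1, 0, 1, 1, 0, 1, 0, 1]  # hypothetical 8-bit signature

def embed(pixels):
    """Write the signature into the LSBs of the leading pixels."""
    marked = list(pixels)
    for i, bit in enumerate(WATERMARK_BITS):
        marked[i] = (marked[i] & ~1) | bit  # set the lowest bit only
    return marked

def detect(pixels):
    """Return True if the leading LSBs match the signature."""
    return [p & 1 for p in pixels[:len(WATERMARK_BITS)]] == WATERMARK_BITS

image = [200, 13, 57, 98, 244, 7, 180, 33, 99, 120]  # toy grayscale pixels
marked = embed(image)
print(detect(marked))  # True: watermark present
print(detect(image))   # False: unmarked image
```

Unlike this sketch, which a simple re-save or crop would destroy, DeepMind says its watermark is designed to persist through edits, which is what makes the problem technically hard.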
“SynthID isn’t foolproof against extreme image manipulations, but it does provide a promising technical approach for empowering people and organisations to work with AI-generated content responsibly,” the company said. “This tool could also evolve alongside other AI models and modalities beyond imagery such as audio, video and text.”
The rise in popularity of AI-generation tools such as Midjourney and ChatGPT has raised concerns among activists and experts, particularly after deepfake images went viral showing former US President Donald Trump being arrested, Elon Musk walking with congresswoman Alexandria Ocasio-Cortez, and Pope Francis wearing a puffer jacket.
Recently, China’s Cyberspace Administration issued regulations requiring tech companies to mark AI-generated content. The country also asked companies to ensure that the data being used to train AI models will not discriminate against people based on aspects such as ethnicity, race and gender.
Earlier this year, E&T used AI tools to create a cover and editorial for an issue of the magazine. We also presented images created by Midjourney AI in response to E&T prompts to a group of art critics and asked them for a professional review.