Google has commenced trials of an innovative digital watermark system designed to identify images produced by artificial intelligence (AI).
Developed by DeepMind, Google’s AI division, the technology, called SynthID, aims to flag images created by machines and provide a crucial tool in the fight against misleading content.
SynthID works by making subtle alterations to individual pixels within an image. These changes are invisible to the human eye but remain detectable by software. While not foolproof, it represents a novel approach to distinguishing human-generated from AI-generated visuals.
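DeepMind has not published how SynthID actually modifies pixels, so the following is purely a conceptual illustration of the general idea of an invisible, machine-readable watermark: a classic least-significant-bit (LSB) scheme on a toy greyscale pixel array. All names here are illustrative, not part of SynthID.

```python
# Conceptual illustration only -- SynthID's real algorithm is not public.
# A least-significant-bit (LSB) watermark hides a bit pattern in the
# lowest bit of each pixel value: invisible to the eye, trivial for a
# program to read back.

def embed(pixels, bits):
    """Embed watermark bits into the least significant bit of each pixel."""
    return [(p & ~1) | b for p, b in zip(pixels, bits)]

def extract(pixels, n):
    """Recover the first n watermark bits from the pixel data."""
    return [p & 1 for p in pixels[:n]]

# A toy 8-pixel greyscale "image" (values 0-255) and an 8-bit mark.
image = [200, 201, 13, 77, 90, 154, 32, 255]
mark = [1, 0, 1, 1, 0, 0, 1, 0]

stamped = embed(image, mark)

# Each pixel changes by at most 1 out of 255 -- imperceptible.
assert all(abs(a - b) <= 1 for a, b in zip(image, stamped))
assert extract(stamped, 8) == mark
```

Note that a naive LSB mark like this is destroyed by resizing or recompression; making the watermark survive such edits, as SynthID claims to, is the hard part of the problem.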
The technology signifies a step forward in addressing the escalating challenge of distinguishing genuine photographs from artificially generated ones.
AI-powered image generators, such as Midjourney, have achieved widespread popularity. Midjourney now boasts a user base exceeding 14.5 million individuals. It is estimated that more AI images have been generated in one year than photographs were taken in the first 150 years of photographic history.
The watermarking system was developed alongside Google’s own image generator, Imagen, and will create and check watermarks only for images produced by that tool. Watermarks are conventionally used as indicators of ownership and to deter unauthorized copying and reuse of images.
Existing watermarking methods are of limited use for identifying AI-generated images because they can be cropped out or edited away. Google’s SynthID, by contrast, embeds an inconspicuous watermark designed to remain detectable even after post-processing modifications.
Pushmeet Kohli, the Head of Research at DeepMind, emphasized the subtlety of their system’s modifications, ensuring minimal discernible alterations to the human eye. This characteristic sets it apart from hashing techniques, which can be compromised through cropping or editing.
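The contrast with hashing can be sketched in a few lines: a cryptographic hash fingerprints an exact sequence of bytes, so even a one-pixel edit yields a completely different digest, which is why hash-based matching breaks under cropping or editing. The toy byte string below stands in for real image data.

```python
import hashlib

# Hashing identifies an exact file; a one-pixel change produces an
# entirely different digest, so hash matching fails after any edit.

original = bytes([200, 201, 13, 77, 90, 154, 32, 255])  # toy pixel data
edited = bytearray(original)
edited[0] ^= 1  # flip the lowest bit of one pixel

h1 = hashlib.sha256(original).hexdigest()
h2 = hashlib.sha256(bytes(edited)).hexdigest()

assert h1 != h2  # the fingerprints no longer match
```

A robust watermark aims for the opposite property: the embedded signal, not the exact bytes, is what the detector looks for, so it can survive edits that change every byte of the file.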
Kohli told the BBC: “You can change the colour, you can change the contrast, you can even resize it… [and DeepMind] will still be able to see that it is AI-generated.” He cautioned, however, that the launch is experimental and that user feedback is needed to make the system more robust.
In the broader tech landscape, companies such as Microsoft and Amazon have also pledged to incorporate watermarks to distinguish AI-generated materials. Beyond static images, Meta has unveiled plans to integrate watermarks within its unreleased video generator, Make-A-Video, responding to the demand for transparency in AI-generated multimedia.
In contrast, China has taken a different approach, banning AI-generated images devoid of watermarks starting this year.
There seems to be a growing consensus that AI-generated content needs greater transparency overall, and that is surely a good sign.