Google has recently introduced SynthID, a watermark developed by its DeepMind team that is invisible to the human eye but detectable by AI tools, and is intended to identify AI-generated images.
The team has been working on this project for several years, believing it essential to create reliable tools that can differentiate AI-generated images from human-made ones, especially given the potential for misinformation and malicious intent in deepfakes and other AI-made fake pictures.
SynthID is a highly advanced tool that works as an invisible watermark for images made by AI.
The watermark is embedded in the image’s pixels, so it produces no visible change to the picture but is easily detectable by dedicated AI tools.
It is also designed to withstand the most common watermark-removal methods, such as cropping, resizing, or running the image through watermark-erasing software.
This protects the original image while allowing AI detectors to readily determine whether it is authentic or AI-generated.
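Google has not published SynthID’s algorithm, so purely as an intuition for how information can hide in pixel values without visible change, here is a deliberately naive least-significant-bit (LSB) watermark sketch. Note the contrast with SynthID: a simple LSB mark is fragile and would not survive resizing or compression, which is exactly what robust schemes are built to resist. All function names below are illustrative, not part of any Google API.

```python
import numpy as np

def embed_lsb(pixels: np.ndarray, bits: list) -> np.ndarray:
    """Hide watermark bits in the least significant bit of each pixel value."""
    out = pixels.copy().ravel()
    for i, b in enumerate(bits):
        out[i] = (out[i] & 0xFE) | (b & 1)  # overwrite only the LSB: at most a +/-1 change
    return out.reshape(pixels.shape)

def extract_lsb(pixels: np.ndarray, n: int) -> list:
    """Read back the first n hidden bits."""
    return [int(v) & 1 for v in pixels.ravel()[:n]]

# Example: stamp 8 bits into a 4x4 grayscale image
img = np.random.randint(0, 256, (4, 4), dtype=np.uint8)
mark = [1, 0, 1, 1, 0, 0, 1, 0]
stamped = embed_lsb(img, mark)
print(extract_lsb(stamped, len(mark)))  # → [1, 0, 1, 1, 0, 0, 1, 0]
```

Production watermarks instead spread the signal redundantly across many pixels or frequency components, which is what lets them survive the cropping and resizing attacks mentioned above.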
Google is currently cautious about revealing too much about SynthID while it is still new. In a conversation with The Verge, DeepMind CEO Demis Hassabis acknowledged that SynthID is still a work in progress and isn't flawless yet, and that the company’s primary focus is ensuring the tool functions correctly.
Initially, SynthID will be tested with Google Cloud users through the Vertex AI platform and the Imagen image generator. Once it has gathered more data and insight from real-world usage, the company will share further details on how SynthID works.
Another reason to keep details about the new watermarking system vague is that, as has historically happened with antivirus software, Google expects a constant struggle between SynthID's creators and those trying to defeat or manipulate it, driving continual improvements to the tool. The longer the details stay under wraps, the better the chances of keeping attackers from finding ways around it.
In the long run, Google hopes SynthID will become a universal internet standard and extend to other media, such as video and text.
Google isn't alone in tackling AI image detection and the ethical use of AI image generation. Other industry giants, such as Meta and OpenAI, have also promised to ramp up their AI safety systems.
Despite this, Hassabis is confident that watermarking will play a crucial role in addressing the issue regarding online image use.
There are several potential uses for SynthID beyond exposing deepfakes, such as verifying images in advertising copy or ensuring genuine product photos aren't confused with AI-generated ones. There is also talk of incorporating SynthID into applications like Slides and Docs.
It’s said that Google will use its Cloud Next conference as a launchpad for this new tool.
What do you think about Google’s invisible watermark for AI images? Do you think it’ll be the standard watermarking tool of the future?
I am an experienced author with expertise in digital communication, stock media, design, and creative tools. I have closely followed and reported on AI developments in this field since its early days. I have gained valuable industry insight through my work with leading digital media professionals since 2014.