OpenAI announced that images generated with its latest AI image generation model, Dall-E 3, will now include provenance metadata and a watermark following the C2PA standard, clearly identifying them as AI-generated.
The move is intended to increase the trustworthiness of the company's AI tools and the transparency surrounding machine-made content. Several other companies have already started watermarking AI-generated imagery and adopting the C2PA's provenance standards, both for ethical reasons and to comply with the emerging legal framework for AI-generated media.
According to the company's updated FAQ article, images generated with its latest text-to-image generator, Dall-E 3, whether via ChatGPT or through the model's API, will from now on have metadata embedded following the provenance standard promoted by the Coalition for Content Provenance and Authenticity (C2PA).
The C2PA was co-founded by Adobe, Microsoft, Intel, BBC, Truepic, and Arm. It developed a metadata structure, known as Content Credentials, that documents the origin and nature of images, and it is pushing to establish this as an industry standard. Many high-profile companies have already integrated Content Credentials into their policies, with Adobe and other firms promoting adoption through the Content Authenticity Initiative (CAI).
The new watermarks from Dall-E 3 include the official Content Credentials logo (CR) visible on the images, plus embedded metadata detailing basic information such as the time and date of creation and, crucially, the AI-generated nature of the media. Users can inspect this information for an image using tools like the Content Credentials Verify website.
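For the technically curious: C2PA manifests are typically embedded in JPEG files as JUMBF metadata boxes. A rough, hedged heuristic for spotting one is simply to look for the JUMBF and C2PA byte labels in the file. This sketch is an assumption-laden shortcut for illustration only; it does not replace real verification, which requires parsing the box structure and cryptographically validating the manifest (as the Content Credentials Verify site or the official C2PA tooling does).

```python
def looks_like_c2pa(data: bytes) -> bool:
    """Rough heuristic: check raw bytes for JUMBF/C2PA markers.

    C2PA manifests live in JUMBF boxes (box type "jumb") with a
    "c2pa" label. Finding both byte strings only *hints* that a
    manifest may be present; real verification must parse and
    cryptographically validate the manifest.
    """
    return b"jumb" in data and b"c2pa" in data


def file_may_have_content_credentials(path: str) -> bool:
    """Apply the heuristic to a file on disk."""
    with open(path, "rb") as f:
        return looks_like_c2pa(f.read())
```

A positive result here is only a hint to run the image through a proper verifier; a negative result on a resized or re-encoded copy proves nothing, since the manifest may have been stripped.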
As the firm acknowledges in its communication about the new watermarks, image metadata is far from an infallible method. It can easily be removed or bypassed.
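To illustrate just how fragile embedded metadata is, the following sketch walks a JPEG's segment table and drops the APPn segments (where EXIF, XMP, and C2PA/JUMBF data are stored) without touching the image data. This is a simplified toy parser built on assumptions about well-formed input (no 0xFF fill-byte padding, a single scan), not production code:

```python
def strip_app_segments(jpeg: bytes) -> bytes:
    """Drop APP1..APP15 metadata segments (markers 0xFFE1-0xFFEF) from a JPEG.

    Demonstrates why embedded provenance data is fragile: a simple
    segment filter (or an ordinary re-save) discards it while leaving
    the pixels intact. Toy parser: assumes a well-formed JPEG with no
    fill bytes between segments.
    """
    assert jpeg[:2] == b"\xff\xd8", "not a JPEG (missing SOI marker)"
    out = bytearray(b"\xff\xd8")
    i = 2
    while i < len(jpeg):
        if jpeg[i] != 0xFF:
            out += jpeg[i:]  # unexpected data: copy the rest verbatim
            break
        marker = jpeg[i + 1]
        if marker == 0xDA:   # SOS: entropy-coded image data follows
            out += jpeg[i:]
            break
        length = int.from_bytes(jpeg[i + 2:i + 4], "big")
        segment = jpeg[i:i + 2 + length]
        if not (0xE1 <= marker <= 0xEF):  # keep all but APP1..APP15
            out += segment
        i += 2 + length
    return bytes(out)
```

Note that APP0 (the JFIF header) is deliberately kept so the file stays decodable; everything from APP1 upward, including any C2PA manifest, is gone after one pass.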
Still, OpenAI sees embedding provenance credentials in its model's output, and encouraging users to preserve and work with them, as a way to promote more responsible use of AI-generated content and to strengthen trust in digital media as a whole.
Equally relevant, competing firms like Meta and Google already use AI-disclosing watermarks, whether C2PA-compliant or proprietary. This isn't just about transparency; it also anticipates upcoming regulations for AI-generated content, which are still in their early days but certainly developing worldwide. In the US, for example, the government issued an executive order at the end of 2023 setting safety guidelines for AI, which includes, precisely, the unequivocal identification of AI-generated media as such.
Do you find Content Credentials and invisible watermarks useful? Share your thoughts!
I am an experienced author with expertise in digital communication, stock media, design, and creative tools. I have closely followed and reported on AI developments in this field since its early days. I have gained valuable industry insight through my work with leading digital media professionals since 2014.