Last update on August 1, 2023

PhotoGuard: MIT-Designed AI to Prevent Unauthorized AI Image Manipulation

4 min read

Protective technologies that guard against AI image alteration, and against the misinformation that AI-generated images can spread, are in as high demand as AI image editing and generation themselves.

MIT CSAIL has joined the discussion by presenting PhotoGuard, a new AI model that protects images from unauthorized or undesired AI manipulation. 

Using sophisticated techniques that disrupt generative AI models' abilities while remaining invisible to the human eye, PhotoGuard is meant to provide a way to prevent deepfakes and other photorealistic AI-generated media that could be used with malicious intent.

Interested? So are we! Let’s learn more. 


The Need for Transparency and for Preventing Malicious Uses of Generative AI and AI Image Editing

Ever since AI image generators disrupted the visual world (and it wasn't that long ago!), and as generative AI models for imagery keep improving and become capable of producing highly realistic pictures, there has been growing concern about the potential for abuse of the technology. Misinformation, manipulation of public sentiment, defamation, and even blackmail are all possible and quite serious risks stemming from publicly available tools that can produce photorealistic images of things that never happened.

More than one company and organization has taken the initiative to ensure AI image generation is used responsibly. Popular AI image generators like DALL-E 2, Midjourney, and Stable Diffusion have incorporated user guidelines and filters to stop people from generating potentially troublesome content.

Image licensing specialists that have developed their own generative AI apps, such as Shutterstock and Adobe, have taken the same steps and then some, adding mandatory disclosure tags for AI-generated images available in their catalogs. In addition, Adobe has spearheaded the Content Authenticity Initiative together with dozens of other related companies, and it is trying to establish its Content Credentials as an industry standard, aiming to make it perfectly clear when an image is AI-generated, no matter how realistic it looks.

Finally, some companies are employing watermarks and other “traditional” resources to protect their visual content from AI image alterations. 

But MIT’s computer science specialists have devised an innovative solution in PhotoGuard. 

PhotoGuard: A Solution to Stop Generative AI from Editing Pictures

PhotoGuard is AI software that helps prevent inappropriate AI image editing. It was developed at MIT CSAIL (the Computer Science and Artificial Intelligence Laboratory) by a team of researchers led by doctoral student Hadi Salman, together with co-authors Alaa Khaddaj, Guillaume Leclerc, Andrew Ilyas, and Aleksander Madry.

They presented this tool in their research paper titled “Raising the Cost of Malicious AI-Powered Image Editing” and also at the International Conference on Machine Learning last July. 

However, the software isn't ready for deployment or available to the public yet. According to the authors in conversation with VentureBeat, turning this AI model into an effective app would require creating versions of it specific to each AI image generator or editor. Since that would involve reaching agreements with the developers of the leading generative AI apps and establishing policies and protocols, it will take some time.

A Two-Fold Model to Fool AI Image Editors

It’s a rather interesting and innovative solution. AI image generators understand visual content through complex algorithms that use a mathematical description of the placement and color of every pixel in an image. PhotoGuard makes tiny but critical modifications to those mathematical descriptions, invisible to the human eye, that sabotage AI edits: the alterations stop generative AI models from “understanding” the image correctly and thus make it impossible for them to edit it with accurate results.
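To make the scale of those modifications concrete, here is a tiny, hypothetical sketch in PyTorch (not the researchers' released code): a perturbation whose per-pixel magnitude stays under a budget far too small for the eye to notice, which is the kind of change PhotoGuard hides inside an image.

import torch

# Toy illustration of an imperceptible perturbation budget. `image` is a
# random stand-in here; in practice it would be a loaded photo in [0, 1].
image = torch.rand(3, 256, 256)

eps = 8 / 255                                   # per-pixel budget, invisible to the eye
delta = (torch.rand_like(image) * 2 - 1) * eps  # perturbation within that budget
perturbed = (image + delta).clamp(0, 1)

# No pixel moves by more than ~3% of its value range, so a person cannot
# tell the two images apart ...
print((perturbed - image).abs().max().item())

# ... yet when delta is optimized against a specific generative model
# (the two methods described below) rather than drawn at random, the
# model's internal description of the image shifts dramatically.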

It uses two different approaches. In the Encoder method, the software identifies the generative AI model’s internal representation of the target image and strategically applies artifacts that fool the model and stop it from manipulating the image. In the Diffusion method, minuscule perturbations are distributed across the image to create a decoy version of it that confuses the generative model, making it produce nonsensical or completely deviated results that don’t meet the intended editing purpose.
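As a rough illustration of the Encoder method, the sketch below (our own simplification, with a tiny random network standing in for a real diffusion model's image encoder) optimizes an imperceptible perturbation so that the encoder's description of the photo drifts toward that of a plain grey image; an editing model then effectively "sees" the grey target instead of the photo.

import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical stand-in for the image encoder of a latent diffusion model.
encoder = nn.Sequential(
    nn.Conv2d(3, 16, 4, stride=4), nn.ReLU(),
    nn.Conv2d(16, 4, 4, stride=4),
)

def encoder_attack(image, eps=8 / 255, step=2 / 255, iters=50):
    """Encoder-method sketch: nudge the image so its latent matches a grey target."""
    target = encoder(torch.full_like(image, 0.5)).detach()    # latent of a plain grey image
    delta = torch.zeros_like(image, requires_grad=True)

    for _ in range(iters):
        latent = encoder((image + delta).clamp(0, 1))
        loss = F.mse_loss(latent, target)                     # distance to the grey latent
        loss.backward()
        with torch.no_grad():
            delta -= step * delta.grad.sign()                 # step toward the target latent
            delta.clamp_(-eps, eps)                           # keep it imperceptible
            delta.copy_((image + delta).clamp(0, 1) - image)  # keep the image valid
        delta.grad.zero_()

    return (image + delta).detach()

immunized = encoder_attack(torch.rand(1, 3, 64, 64))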

Both methods secure an image from unauthorized AI-based alterations by making changes that AI models can perceive but human observers cannot. The Diffusion method is the more sophisticated and computationally intensive of the two, but the researchers say they have found a way to simplify its necessary steps while keeping it effective.
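The Diffusion method is harder to illustrate faithfully because it optimizes through the image-generation process itself, but the gist, again in a heavily simplified, hypothetical sketch with a small network standing in for the whole editing pipeline, is that the perturbation is tuned so the final edited output collapses toward a meaningless target rather than just the intermediate representation.

import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical stand-in for an entire AI editing pipeline (encode, denoise
# over diffusion steps, decode). Differentiating through a real pipeline is
# what makes this method so computation-hungry.
edit_pipeline = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 3, 3, padding=1), nn.Sigmoid(),
)

def diffusion_attack(image, eps=8 / 255, step=2 / 255, iters=50):
    """Diffusion-method sketch: make the pipeline's final output collapse to grey."""
    target_output = torch.full_like(image, 0.5)               # meaningless grey result
    delta = torch.zeros_like(image, requires_grad=True)

    for _ in range(iters):
        edited = edit_pipeline((image + delta).clamp(0, 1))   # simulate the edit end to end
        loss = F.mse_loss(edited, target_output)              # push the result toward grey
        loss.backward()
        with torch.no_grad():
            delta -= step * delta.grad.sign()
            delta.clamp_(-eps, eps)                           # imperceptible budget
            delta.copy_((image + delta).clamp(0, 1) - image)
        delta.grad.zero_()

    return (image + delta).detach()

protected = diffusion_attack(torch.rand(1, 3, 64, 64))

In the real method the gradient has to flow back through the denoising steps of the diffusion model, which is where the heavy computation comes from and, presumably, where the researchers' simplification applies.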

We are certainly intrigued by PhotoGuard and its functionality. What do you think?

THE AUTHOR

Ivanna Attie

All About Ivanna

I am an experienced author with expertise in digital communication, stock media, design, and creative tools. I have closely followed and reported on AI developments in this field since its early days. I have gained valuable industry insight through my work with leading digital media professionals since 2014.


