Last update on October 25, 2023

Nightshade Poisons Images to Fight Unauthorized AI Data Scraping

4 min read
(Image: Nightshade results)

A new tool named Nightshade was unveiled today that promises to help visual artists keep their intellectual property from being used without permission to train generative AI image models. It does so by "poisoning" the data in the images, rendering them useless for AI training.

The technology is also reported to cause real damage to text-to-image generators, significantly degrading their ability to synthesize concepts after ingesting a relatively small amount of poisoned data.

The developers behind Nightshade say the tool is intended to level the playing field in visual AI in favor of artists and to discourage the unauthorized use of copyrighted images in AI training datasets, at a time when legal frameworks are in their infancy and the few regulations in place are virtually unenforceable.

Nightshade: Intentionally Corrupted Image Data to Confuse AI Models

Earlier this year, a group of Ph.D. students and computer science professors from the University of Chicago released Glaze, a tool that helps photographers and visual artists protect their work from being replicated by AI image generators. It applies invisible changes to an image's pixels that distort the information about its artistic style. They liken it to a "style cloak" applied to the image: a pencil-sketch portrait, for example, is interpreted by any AI model as an impressionist painting.

The same group has now unveiled Nightshade, a tool that takes that protection a step further, turning it into a full-blown offensive mechanism that attacks AI image generators from within their training datasets.

Using the same base of invisible pixel-alteration technology, Nightshade corrupts an image's data so that generative AI models (such as Midjourney, Stable Diffusion, or DALL-E) read incorrect information from it. For example, a picture of a dog is corrupted so that when an AI model interprets it, it learns that it depicts a cat.
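To make the general idea concrete, here is a minimal, purely illustrative sketch of this kind of targeted perturbation: a small, nearly invisible pixel change is optimized so that a feature extractor "reads" the image as a different concept. This is not Nightshade's actual algorithm; the model choice, file names, perturbation budget, and optimization settings below are all assumptions for demonstration.

```python
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"

# A pretrained backbone stands in for the scraper's feature extractor
# (an assumption; Nightshade targets the text-to-image training pipeline itself).
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval().to(device)
extract = torch.nn.Sequential(*list(backbone.children())[:-1])  # drop classifier head

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
])

def features(x: torch.Tensor) -> torch.Tensor:
    # Flatten pooled features into a (batch, dim) vector.
    return extract(x).flatten(1)

# Hypothetical file names: "dog.jpg" is the artist's original image,
# "cat.jpg" represents the concept we want the model to mistakenly learn.
source = preprocess(Image.open("dog.jpg").convert("RGB")).unsqueeze(0).to(device)
target = preprocess(Image.open("cat.jpg").convert("RGB")).unsqueeze(0).to(device)

with torch.no_grad():
    target_feat = features(target)

eps = 8 / 255  # max per-pixel change, kept small so the edit stays visually invisible
delta = torch.zeros_like(source, requires_grad=True)
optimizer = torch.optim.Adam([delta], lr=1e-2)

for _ in range(200):
    optimizer.zero_grad()
    poisoned = (source + delta).clamp(0, 1)
    # Pull the poisoned image's features toward the target concept's features.
    loss = F.mse_loss(features(poisoned), target_feat)
    loss.backward()
    optimizer.step()
    with torch.no_grad():
        delta.clamp_(-eps, eps)  # keep the perturbation within the invisibility budget

poisoned_image = (source + delta).detach().clamp(0, 1)
# To a human viewer this still looks like the dog; to the feature
# extractor, it now resembles the "cat" concept.
```

The sketch only shows the general principle of a nearly invisible, targeted perturbation; Nightshade's published method is considerably more sophisticated and is designed specifically to poison text-to-image training.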

(Image: Nightshade poisoned data examples)

This not only renders the images useless for AI image training, but it also works as a sabotage method: feeding Nightshade-poisoned images to an AI model can considerably degrade its effectiveness, and it doesn't take many images to achieve it.

According to the researchers' paper, 50 corrupted images are enough to make an image generator start distorting its results, 100 poisoned files can make the tool generate mostly wrong images, and at around 300 Nightshade-treated pictures the model generates entirely incorrect concepts.

At this time, Nightshade is not available to the public, but the team intends to add it to Glaze so that artists have the option to apply invisible data corruption to their images on top of the style cloak.

A Weapon for Artists to Fight Copyright Infringement by AI Developers

The research group made it clear that they conceived this new tool as a last-resort method for artists to defend themselves against AI companies' indiscriminate scraping and use of their work in training datasets for AI models, all done without permission and without compensation.

They also wanted to balance out the power dynamic in this dispute, in which individual artists are Davids to the AI labs' Goliaths.

In other words, the tool wasn't created for malicious attacks against generative AI tools, but to prevent and discourage the unauthorized scraping and use of copyrighted work to train AI models.

This is certainly an interesting technology to emerge in the current state of the visual AI field. Legal frameworks have proven insufficient, efforts to establish broader rules (such as the Adobe-led Content Authenticity Initiative) are still in their early days, and the few limitations and protocols meant to stop unauthorized data scraping rely on the honor system, with no real way to be legally enforced or verified.

Plus, as the researchers also cleverly pointed out, poisoned images are only a threat to AI developers who deliberately work with unlicensed, non-cleared content in their training datasets. Big companies that pride themselves on being as legally clean as possible and only work with licensed datasets will never be at risk of feeding their models a purposely corrupted image.

Would you use Nightshade to protect your work? Share your thoughts!

THE AUTHOR

Ivanna Attie


I am an experienced author with expertise in digital communication, stock media, design, and creative tools. I have closely followed and reported on AI developments in this field since its early days. I have gained valuable industry insight through my work with leading digital media professionals since 2014.

