Nightshade: Artists’ Weapon Against Image-Generating AI Theft!

TLDR:

  • Computer scientists at the University of Chicago have developed a tool called Nightshade that “poisons” digital artwork, corrupting image-generating AI models that train on it without permission.
  • Nightshade is part of the Glaze Project, led by Professor Ben Zhao, which aims to alter how AI training algorithms perceive the style and content of digital artwork.

A group of computer scientists from the University of Chicago has developed a new tool called Nightshade, which aims to protect digital artwork from theft by corrupting image-generating AI models. Nightshade is part of the Glaze Project, led by Professor Ben Zhao, which focuses on altering how AI training algorithms perceive the style and content of digital artwork. The tool is meant to “poison” digital artwork, making it detrimental to the training of image-generating AI models that engage in intellectual property theft, such as DALL-E, Midjourney, and Stable Diffusion.

Nightshade is built using the open-source machine learning framework PyTorch and alters images at the pixel level. The changes are not obvious to humans viewing the images, but AI models perceive the images differently, which corrupts their training process. The Glaze Project also developed another tool, Glaze, which convinces AI training models that they are seeing a different artistic style than a human would see. For example, Glaze can make an AI model think a “glazed” charcoal drawing is actually an oil painting.
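To give a rough sense of how pixel-level perturbation works in general, here is a minimal PyTorch sketch. This is not Nightshade’s actual algorithm; the toy encoder, the perturbation budget `eps`, and the “decoy” target features are all illustrative stand-ins for what, in practice, would involve a real text-to-image model’s encoder.

```python
# A minimal sketch of pixel-level adversarial perturbation in PyTorch.
# NOT Nightshade's real method; it only illustrates how small, bounded
# pixel changes can shift what a model "sees" without fooling a human.
import torch
import torch.nn as nn

# Stand-in feature extractor (a real attack would target an actual
# text-to-image model's image encoder).
encoder = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1),
    nn.Flatten(),
    nn.Linear(8 * 32 * 32, 64),
)
encoder.eval()

image = torch.rand(1, 3, 32, 32)   # the artist's original image
target_feat = torch.randn(1, 64)   # features of a decoy concept (e.g. "dog")
delta = torch.zeros_like(image, requires_grad=True)
eps = 0.03                         # perturbation budget: keeps changes subtle

opt = torch.optim.Adam([delta], lr=0.01)
for _ in range(200):
    opt.zero_grad()
    # Push the perturbed image's features toward the decoy concept.
    loss = nn.functional.mse_loss(encoder(image + delta), target_feat)
    loss.backward()
    opt.step()
    # Clamp so the change stays visually inconspicuous.
    delta.data.clamp_(-eps, eps)

poisoned = (image + delta).clamp(0, 1)  # still looks like the original to humans
```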

However, Nightshade goes a step further: it convinces AI models that the content of an image is something other than what a human sees. An AI model trained on poisoned images could, for instance, come to interpret a picture of a cat as a picture of a dog, so that when a user prompts it for a picture of a cat, it generates an image of a dog instead.

Both Glaze and Nightshade add some noise and distortion to digital images, but users can adjust the intensity. The Glaze Project is not against AI; the tools were developed to foster an ecosystem in which developers of image-generating programs must obtain approval from rightsholders to access unaltered training images.
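In the illustrative sketch above, that user-facing intensity setting corresponds roughly to the `eps` budget: raising it makes the poisoning more aggressive, at the cost of more visible distortion in the protected artwork.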

According to Professor Ben Zhao, the Glaze Project’s primary goal is not profit but discovery: to learn new things through research and to make a positive impact on the world. The Nightshade tool is part of that effort to combat intellectual property theft in the AI and digital art landscape.