MIT unveils PhotoGuard tech that protects images from malicious AI edits
Written by: Andrew Tarantola
Source: Engadget
Image source: Generated by Unbounded AI tool
Dall-E and Stable Diffusion were only the beginning. As generative AI systems grow in popularity and companies work to differentiate their products from their competitors', chatbots across the internet are gaining the ability to edit and create images, with companies like Shutterstock and Adobe leading the way. But these new AI capabilities also revive familiar problems, such as unauthorized tampering with, or outright misappropriation of, existing online works and images. Watermarking techniques can help mitigate the latter problem, while the new "PhotoGuard" technique developed by MIT CSAIL may help prevent the former.
PhotoGuard reportedly works by altering select pixels in an image in a way that destroys an AI model's ability to understand the image's content. These "perturbations," as the research team calls them, are invisible to the human eye but easy for machines to read. The "encoder" attack method that introduces these artifacts targets the algorithmic model's latent representation of the target image, that is, the complex mathematics describing the position and color of every pixel in the image, essentially preventing the AI from understanding what it is looking at. (Note: artifacts are features that appear in an image but do not exist in the imaged object.)
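For illustration, here is a minimal sketch of the encoder-attack idea, not PhotoGuard's actual code: a projected-gradient-descent loop (in PyTorch, with a hypothetical differentiable `encoder` and a `target_latent` such as the latent of a gray image) that searches for a small, bounded perturbation pushing the image's latent representation toward the target.

```python
import torch

def encoder_attack(image, encoder, target_latent, eps=0.06, step=0.01, iters=100):
    """Sketch of an encoder attack.

    image: (1, 3, H, W) tensor in [0, 1]; encoder: differentiable model
    mapping images to latents; target_latent: latent to impersonate.
    """
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(iters):
        latent = encoder(image + delta)
        # Drive the latent of the perturbed image toward the target latent.
        loss = torch.nn.functional.mse_loss(latent, target_latent)
        loss.backward()
        with torch.no_grad():
            delta -= step * delta.grad.sign()                  # signed gradient step
            delta.clamp_(-eps, eps)                            # keep perturbation imperceptible
            delta.copy_((image + delta).clamp(0, 1) - image)   # stay in valid pixel range
        delta.grad.zero_()
    return (image + delta).detach()
```

The small `eps` bound is what keeps the perturbation invisible to people while still shifting what the model "sees."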
The more advanced, and far more computationally intensive, "diffusion" attack method goes further, disguising an image as a different image in the AI's eyes. It defines a target image and optimizes the perturbations so that the protected image resembles that target. Any edits an AI tries to make to these "immunized" images are instead applied to the fake "target" images, producing results that do not look real.
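The diffusion attack can be sketched in the same way, except the perturbation is optimized through the entire editing pipeline rather than just the encoder, which is why it is so much more expensive to compute. The `edit_fn` below stands in for a differentiable wrapper around the full diffusion edit; the names are illustrative assumptions, not the paper's API.

```python
import torch

def diffusion_attack(image, edit_fn, target_image, eps=0.06, step=0.01, iters=50):
    """Sketch of a diffusion attack: make any edit of the immunized image
    get pulled toward a chosen target image, so edited results look fake."""
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(iters):
        edited = edit_fn(image + delta)        # run the (differentiable) edit pipeline
        loss = torch.nn.functional.mse_loss(edited, target_image)
        loss.backward()                        # backprop through the whole pipeline (costly)
        with torch.no_grad():
            delta -= step * delta.grad.sign()
            delta.clamp_(-eps, eps)
            delta.copy_((image + delta).clamp(0, 1) - image)
        delta.grad.zero_()
    return (image + delta).detach()
```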
"The encoder attack makes the model think that the input image (to be edited) is some other image (such as a grayscale image)," Hadi Salman, a Ph.D. student at MIT and the paper's first author, told Engadget. "The Diffusion attack forces the Diffusion model to edit some of the target images, which can also be some gray or random images." Protected images for reverse engineering.
"A collaborative approach involving model developers, social media platforms, and policymakers can be an effective defense against unauthorized manipulation of images. Addressing this pressing issue is critical today," Salman said in a release. "While I'm excited to be able to contribute to this solution, there is still a lot of work to do to make this protection practical. Companies developing these models need to invest in targeting the threats these AI tools may pose for robust immune engineering."