
As AI-generated content becomes increasingly widespread, so do concerns about its potential misuse. To address this, researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have developed “PhotoGuard,” a technology aimed at protecting your photos from unauthorized and malicious AI edits in just a few clicks.
[Related reading: Battling misinformation: tech giants to watermark AI-generated content]
As we’ve discussed before, these increasingly popular AI generators open the door to potential problems, including unauthorized alteration of images and artwork theft. Of course, photo editing existed long before Photoshop, but modern AI tools have made it easier and faster than ever.
How does PhotoGuard work?
While watermarking techniques help mitigate some of these issues, the team behind PhotoGuard took a different approach. Their tool protects images by making tiny changes to certain pixels that disrupt an AI model’s ability to understand or manipulate the image. These changes, called “perturbations,” act like a shield while leaving the image visually unchanged to the human eye.
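To make the idea concrete, here is a minimal, hypothetical sketch (not PhotoGuard’s actual code) of what a bounded perturbation looks like: each 0–255 pixel value moves by at most a small epsilon, so the “immunized” image is visually indistinguishable from the original. The real system chooses these perturbations adversarially to confuse the model, not randomly as this toy does; the function names and the epsilon value are illustrative assumptions.

```python
# Toy illustration: an "immunized" image differs from the original only
# by tiny, bounded per-pixel perturbations, invisible to the eye.
# (Hypothetical sketch; PhotoGuard computes these adversarially.)
import random

def immunize(pixels, epsilon=2, seed=0):
    """Shift each 0-255 pixel by at most +/-epsilon, clamped to range."""
    rng = random.Random(seed)
    return [min(255, max(0, p + rng.randint(-epsilon, epsilon)))
            for p in pixels]

image = [120, 130, 140, 250, 5]
shielded = immunize(image)
# No pixel moved by more than epsilon, so the image looks unchanged.
print(max(abs(a - b) for a, b in zip(image, shielded)) <= 2)  # True
```

The bound on the perturbation is what keeps the shield invisible; the adversarial optimization (sketched under the attack methods below is where the actual protection comes from) is a separate step.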
The PhotoGuard technique employs two attack methods to safeguard images from malicious edits:
- Encoder attack: This method “makes the model think that the input image (to be edited) is some other image (e.g. a gray image),” as MIT doctoral student and lead author of the paper, Hadi Salman, told Engadget. “Whereas the diffusion attack forces the diffusion model to make edits towards some target image (which can also be some grey or random image).”
- Diffusion attack: This method disguises an image from AI systems by making it resemble a different, target image. It defines a target and optimizes the perturbations in the original image so that the model’s representation of it matches that target. Any edits an AI attempts on these “immunized” images are instead applied toward the fake target, producing unrealistic and obviously manipulated visuals.
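The optimization behind the second attack can be sketched in a few lines. This is a deliberately simplified, hypothetical example: the `encode` function stands in for a diffusion model’s image encoder (here just a rescaling, so the gradient can be written by hand), and all names, step sizes, and pixel values are assumptions, not the paper’s implementation. The idea is the same, though: nudge the image toward the target’s representation while keeping every pixel within a small epsilon of the original.

```python
# Hypothetical sketch of the diffusion-attack idea: optimize a small,
# bounded perturbation so the model "sees" the target (e.g. gray) image.
def encode(pixels):
    # Stand-in for a diffusion model's image encoder.
    return [p / 255.0 for p in pixels]

def loss(pixels, target):
    # Squared distance between the two images in "encoder" space.
    return sum((a - b) ** 2 for a, b in zip(encode(pixels), encode(target)))

def immunize(pixels, target, epsilon=16, steps=200, lr=2000.0):
    """Gradient descent toward the target's encoding, within +/-epsilon."""
    out = list(pixels)
    for _ in range(steps):
        for i in range(len(out)):
            # Analytic gradient of the loss w.r.t. pixel i (possible only
            # because the toy encoder is just division by 255).
            g = 2 * (out[i] / 255.0 - target[i] / 255.0) / 255.0
            out[i] -= lr * g
            # Project back into the epsilon-ball around the original,
            # so the change stays invisible.
            out[i] = max(pixels[i] - epsilon, min(pixels[i] + epsilon, out[i]))
    return out

original = [200, 10, 128]
gray_target = [128, 128, 128]
shielded = immunize(original, gray_target)
# The immunized image sits closer to the gray target in encoder space,
# so the model's edits are anchored to the wrong image.
print(loss(shielded, gray_target) < loss(original, gray_target))  # True
```

In the real setting the encoder is a deep network, so the gradient comes from backpropagation rather than a hand-derived formula, but the loop of “step toward the target, project back into the invisible-perturbation ball” is the core of the technique.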
You can see PhotoGuard in action in this video and get a better idea of how it works:
Salman explained the significance of PhotoGuard: “A collaborative approach involving model developers, social media platforms, and policymakers presents a robust defense against unauthorized image manipulation. Working on this pressing issue is of paramount importance today.”
Potential drawbacks
While PhotoGuard is a promising concept, it’s worth noting that it’s not foolproof. Anyone determined to alter a protected photo can still attempt to defeat the immunization by adding digital noise, cropping, or flipping the image.
To make this protection practical and effective, Salman emphasized that companies developing AI models need to invest in robust immunization techniques against the threats their tools pose. As I discussed in the article about watermarking AI-generated content, this should be a joint effort, and PhotoGuard is certainly one step in that direction.
[via Engadget]