Facial recognition can be used for good causes, but very often it’s not. You can bypass it by wearing masks or using lasers, but there’s now a far subtler tactic. A team of researchers from the University of Chicago has created Fawkes, a system that confuses facial recognition while still making you look like you. So if you’re concerned about facial recognition software scraping your public photos, Fawkes is free for you to download and use.
The most interesting feature of Fawkes is that your photo still looks the same after it’s cloaked. You don’t have to pixelate it or change your appearance in any way. After all, facial recognition systems can identify you even if your face is pixelated. The trick is in subtly altering the pixels of your digital image. These changes are so tiny that they’re invisible to the human eye, yet they’re enough to mislead facial recognition software trained on the result. You can then use a cloaked image as you normally would: upload it to social media, share it with someone, upload it to different sites as a profile photo…
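To get a feel for how small such a perturbation is, here is a minimal sketch in Python using NumPy. Note this is only a toy illustration of an “imperceptible” pixel change: Fawkes itself computes a targeted perturbation that shifts the image’s deep-feature representation, not random noise, and the function name and budget below are invented for this example.

```python
import numpy as np

def toy_cloak(image: np.ndarray, budget: int = 3, seed: int = 0) -> np.ndarray:
    """Add a tiny random perturbation to an 8-bit RGB image.

    Toy illustration only: Fawkes optimizes a targeted, feature-space
    perturbation; this just shows how small a +/- 3 intensity change is.
    """
    rng = np.random.default_rng(seed)
    noise = rng.integers(-budget, budget + 1, size=image.shape)
    # Work in a wider integer type so the addition cannot wrap around,
    # then clamp back to the valid 0..255 pixel range.
    cloaked = np.clip(image.astype(int) + noise, 0, 255).astype(np.uint8)
    return cloaked

# A flat gray "photo": every pixel moves by at most 3 intensity levels
# out of 255, far below what a human viewer would notice.
photo = np.full((64, 64, 3), 128, dtype=np.uint8)
cloaked = toy_cloak(photo)
max_change = int(np.abs(cloaked.astype(int) - photo.astype(int)).max())
print(max_change)  # no more than the budget of 3
```

The point of the exercise: a per-pixel change of a few intensity levels is visually indistinguishable from the original, yet a carefully chosen (not random) change of that magnitude is enough to corrupt what a model learns about your face.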
“The difference, however, is that if and when someone tries to use these photos to build a facial recognition model, ‘cloaked’ images will teach the model a highly distorted version of what makes you look like you. The cloak effect is not easily detectable by humans or machines and will not cause errors in model training. However, when someone tries to identify you by presenting an unaltered, ‘uncloaked’ image of you (e.g. a photo taken in public) to the model, the model will fail to recognize you.”
One of the most interesting questions raised by Fawkes’ development is its relation to Clearview.ai: was it designed as a response to it? The researchers claim that it wasn’t. “It might surprise some to learn that we started the Fawkes project a while before the New York Times article that profiled Clearview.ai in February 2020,” the team writes:
“Our original goal was to serve as a preventative measure for Internet users to inoculate themselves against the possibility of some third-party, unauthorized model. Imagine our surprise when we learned 3 months into our project that such companies already existed, and had already built up a powerful model trained from massive troves of online photos.”
The researchers believe that Clearview.ai is likely “only the tip of the iceberg,” and a rather large iceberg at that. But the goal of Fawkes is to protect you from Clearview.ai as well as other similar systems. For now, it’s very difficult, if not impossible, to distinguish cloaked images from original ones. After all, that’s the whole point. However, the researchers are considering adding small markers to help users identify cloaked photos. More details on that are still to come, though.
If you’d like to read more about Fawkes, you can find the paper here and more information here. As I mentioned, the software is already online, and if you’d like to use it, it’s free to download for both Windows and macOS.