Well, this could be quite the problem for privacy advocates everywhere. In places such as the UK, it could even cause legal issues under things like the Data Protection Act. Pixelating faces is used throughout the world to conceal people's identities on camera. Sometimes they need protection, sometimes they're caught in the background of footage about something else, and sometimes you just forgot to get a release signed.
Wired reports that researchers at the University of Texas at Austin and Cornell Tech claim to have trained a piece of software to see through it. The scariest thing is that the technology is already out there. They didn't actually have to develop any new tools; they simply fed a new set of data into existing facial recognition methods, and the software figured the rest out all by itself.
When the only things looking at images were people, obscuring them with pixelation (or “mosaics”), blur and other tools worked well. Humans could only see what they were shown. If a face was obscured, it might be impossible to identify who the subject actually was.
These days, though, computers scrutinise images far more than humans ever could, and they look at them very differently. With the power of today's computers, they can run all kinds of simulations, testing methods and techniques in the blink of a virtual eye that might take humans years to work through.
In simple terms, the researchers fed a bunch of images into a piece of software using standard image recognition procedures, feeding in versions that are gradually more and more pixelated or blurred. Over time, the software “learns” what a person’s face looks like when obscured. This may sound impractical…
“Well, who’s going to go around and blur or mosaic everybody’s photos and feed them all into a computer?”
But here’s the thing. Nobody needs to; the entire process can be automated. The obfuscation techniques defeated here are fairly standard, widespread tools, YouTube’s proprietary face blurring technology and the filters built into Photoshop among them. They’re easily available, and relatively straightforward to reverse engineer.
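To see just how standard this kind of obfuscation is, here's a minimal sketch of a mosaic effect using Pillow. It isn't YouTube's or Photoshop's actual implementation, just the generic downscale-then-upscale trick that most pixelation filters boil down to; the filename, face coordinates and block size are made-up placeholders.

```python
# A toy mosaic/pixelation pass, roughly what standard blurring tools do:
# shrink the face region, then scale it back up so each block becomes one colour.
from PIL import Image

def pixelate_region(image, box, block_size=8):
    """Pixelate the rectangular region `box` (left, top, right, bottom)."""
    region = image.crop(box)
    w, h = region.size
    # Downscale so each block_size x block_size area collapses to a single pixel...
    small = region.resize((max(1, w // block_size), max(1, h // block_size)),
                          Image.NEAREST)
    # ...then upscale back, giving the familiar mosaic look.
    mosaic = small.resize((w, h), Image.NEAREST)
    image.paste(mosaic, box)
    return image

# Hypothetical usage: the filename and face coordinates are placeholders.
photo = Image.open("subject.jpg")
pixelate_region(photo, box=(120, 80, 280, 240), block_size=10)
photo.save("subject_pixelated.jpg")
```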
This means that when an obscured image is fed into the software, it only needs to run the obscuring process on each of the other images it knows about in order to eventually find a match. In their tests, it identified who was in the image around 80-90% of the time. Of course, the more blur or pixelation there was, the lower the hit rate.
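As a rough illustration of that matching idea, here's a toy sketch: obfuscate every known face the same way the query image was obfuscated, then pick whichever comes out most similar. The actual research trained neural networks on obfuscated images; the raw pixel-distance comparison and the gallery filenames below are just placeholders to show the concept.

```python
# Toy illustration only: obfuscate each known face the same way as the query,
# then return the closest match. The real work used neural networks; a raw
# pixel-distance comparison like this is only a stand-in for the concept.
import numpy as np
from PIL import Image

def obfuscate(image, block_size=10):
    """Apply the same mosaic effect we expect the query image to have."""
    w, h = image.size
    small = image.resize((max(1, w // block_size), max(1, h // block_size)),
                         Image.NEAREST)
    return small.resize((w, h), Image.NEAREST)

def best_match(query_path, gallery):
    """Return the name whose obfuscated photo is closest to the query."""
    query = np.asarray(Image.open(query_path).convert("L").resize((64, 64)),
                       dtype=np.float32)
    scores = {}
    for name, path in gallery.items():
        candidate = obfuscate(Image.open(path).convert("L").resize((64, 64)))
        diff = query - np.asarray(candidate, dtype=np.float32)
        scores[name] = float(np.sqrt((diff ** 2).mean()))  # lower = more alike
    return min(scores, key=scores.get)

# Hypothetical gallery of known, un-obscured photos (paths are placeholders).
gallery = {"alice": "alice.jpg", "bob": "bob.jpg"}
print(best_match("mystery_pixelated.jpg", gallery))
```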

Credit: Richard McPherson, Reza Shokri, and Vitaly Shmatikov
It is important to note that the software can’t reverse the blurring or pixelation process to recreate original detail that no longer exists. That would be the work of science fiction (well, for now). But computers are getting smarter and smarter at seeing things in images that we can’t.
The researchers say that as the process uses standard software and processes, it would be relatively easy for anybody to do. That is to say, easy for anybody who would otherwise find regular facial recognition with artificial neural networks easy. Those with nefarious purposes could definitely use this to do harm.
This is why the researchers have published their findings: to warn the privacy and security communities that such things are now possible using today’s technology. It was inevitable, really, that something like this would eventually happen. It’s amazing to me that it can be done with standard facial recognition techniques, though.
You can read the full report here. It’s a fascinating, if scary, read.
Is this another “end of privacy” moment? Can new tools be developed to obscure individuals in photographs and video footage where privacy or security is an issue? Or do you think those new tools would be doomed to the same fate? Let us know your thoughts in the comments.
[via Wired]