Scientists prove that deepfake detectors aren’t perfect and can still be tricked

John Aldred

John Aldred is a photographer with over 25 years of experience in the portrait and commercial worlds. He is based in Scotland and has been an early adopter – and occasional beta tester – of almost every digital imaging technology in that time. As well as his creative visual work, John uses 3D printing, electronics and programming to create his own photography and filmmaking tools and consults for a number of brands across the industry.

Deepfakes are both an amazing technological achievement and scary as hell. The potential for movies and VFX work is huge, as a number of YouTube channels have shown by pitting deepfakes against the CG versions of characters that have reappeared in various Star Wars movies. But the technology has obvious nefarious implications, too.

While the recreations of deceased actors in movies are most definitely fake, deepfake videos of those still living are not always so obvious. Tech giants like Microsoft and Facebook have been working on counter-technology to spot deepfakes and help prevent the spread of misinformation online. A group of researchers from UC San Diego has shown, though, that these detectors are far from perfect.

Engadget explains that most such detectors work by sending cropped portions of facial data to a neural network. Because a deepfake is based on actual, real human facial movements, the detector checks for things that deepfakes don't reproduce all that well – like blinking. The scientists at UC San Diego found a way around these detectors by inserting adversarial data into every frame of a video to fool them.

To use these deepfake detectors in practice, we argue that it is essential to evaluate them against an adaptive adversary who is aware of these defenses and is intentionally trying to foil these defenses. We show that the current state of the art methods for deepfake detection can be easily bypassed if the adversary has complete or even partial knowledge of the detector.
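The attack described above boils down to a classic adversarial-example trick: add a small, carefully chosen perturbation to each frame so the detector's "fake" score drops. Here is a minimal sketch of the idea, using a hypothetical linear detector in place of a real neural network (the weights `w`, the 16-value "crop", and the step size are all invented for illustration, not taken from the paper):

```python
import numpy as np

# Hypothetical toy "detector": a logistic model over a flattened face crop.
# Real detectors are deep networks, but the attack idea is the same.
rng = np.random.default_rng(0)
w = rng.normal(size=16)  # stand-in for trained detector weights

def detector_score(x):
    """Probability the crop is flagged as fake (sigmoid of a linear score)."""
    return 1.0 / (1.0 + np.exp(-(x @ w)))

def perturb_frame(x, eps=0.5):
    """FGSM-style step: nudge pixel values against the gradient of the
    'fake' score so the detector drifts toward a 'real' verdict. For a
    linear model that gradient is simply w, so we step along -sign(w)."""
    return x - eps * np.sign(w)

fake_frame = rng.normal(size=16) + w  # a frame the detector tends to flag
before = detector_score(fake_frame)
after = detector_score(perturb_frame(fake_frame))
print(f"fake score before: {before:.3f}, after perturbation: {after:.3f}")
```

The perturbation is tiny relative to the frame, so the video still looks the same to a human – only the detector's verdict changes.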

The proposed solution is fairly straightforward: train the detectors using a process similar to adversarial training, in which an adaptive adversary keeps generating new deepfakes that can bypass the detector while it is being trained. This would let the detector keep improving at spotting the tricky deepfakes without flagging genuine footage as fake.
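A sketch of what that adversarial-training loop might look like – everything here, from the linear detector to the Gaussian toy data, is invented for illustration; real systems train deep networks against actual deepfake generators:

```python
import numpy as np

rng = np.random.default_rng(1)

def score(X, w):
    """Sigmoid 'fake' probability from a hypothetical linear detector."""
    return 1.0 / (1.0 + np.exp(-(X @ w)))

def adaptive_adversary(X_fake, w, eps=0.3):
    """Regenerate attacks against the *current* detector each round."""
    return X_fake - eps * np.sign(w)

# Invented toy data: 'real' crops cluster around -1, 'fake' crops around +1.
real = rng.normal(size=(64, 8)) - 1.0
fake = rng.normal(size=(64, 8)) + 1.0

w = np.zeros(8)
for _ in range(200):
    adv = adaptive_adversary(fake, w)              # fresh adversarial fakes
    X = np.vstack([real, fake, adv])
    y = np.concatenate([np.zeros(64), np.ones(128)])
    w -= 0.5 * X.T @ (score(X, w) - y) / len(y)    # logistic-loss step

# The trained detector should now catch even the adversarial fakes.
acc = np.mean(score(adaptive_adversary(fake, w), w) > 0.5)
print(f"accuracy on adversarial fakes: {acc:.2f}")
```

Because the attacks are regenerated against the current detector on every pass, the detector never gets to overfit to one fixed style of deepfake.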

You can read more about the research on the UC San Diego News Center website.

[via Engadget]


