
Deepfakes are simultaneously an amazing technological achievement and scary as hell. The potential for movies and VFX work is huge, as a number of YouTube channels have proven by pitting deepfakes against the CG versions of characters that have reappeared in various Star Wars movies. But there are obvious nefarious implications of such technology, too.
While the recreations of deceased actors in movies are most definitely fake, deepfake videos of those still living are not always so obvious. Tech giants like Microsoft and Facebook have been working on counter-technology to spot deepfakes and help prevent the spread of misinformation online. A group of researchers from UC San Diego has proven, though, that these detectors are far from perfect.
Engadget explains that most such detectors work by sending cropped portions of facial data to a neural network. Because a deepfake is based on actual human facial movements, the detector checks for things that deepfakes don't reproduce all that well, like blinking. The scientists at UC San Diego found a way around this by inserting adversarial data into every frame of the video to fool the detectors. As the researchers put it:
To use these deepfake detectors in practice, we argue that it is essential to evaluate them against an adaptive adversary who is aware of these defenses and is intentionally trying to foil these defenses. We show that the current state of the art methods for deepfake detection can be easily bypassed if the adversary has complete or even partial knowledge of the detector.
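The paper itself is the reference for the full attack, but the white-box case boils down to a classic adversarial-example trick. Here's a minimal, illustrative PyTorch sketch of how a single frame of a deepfake could be nudged toward a "real" verdict; the detector interface, the single-step FGSM-style update, and the epsilon value are all assumptions for the sake of the example, not the researchers' actual code:

```python
import torch
import torch.nn.functional as F

def fgsm_perturb_frame(detector, frame, epsilon=0.01):
    """Nudge one video frame so a deepfake detector calls it "real".

    Assumes `detector` is a binary classifier returning the logit for
    the "fake" class, and `frame` is a (1, 3, H, W) tensor in [0, 1].
    Both the interface and epsilon are illustrative assumptions.
    """
    frame = frame.clone().detach().requires_grad_(True)

    # How strongly the detector believes this frame is a deepfake.
    fake_logit = detector(frame)

    # We want the detector to output "real" (label 0), so we measure
    # the loss against that target and step downhill.
    loss = F.binary_cross_entropy_with_logits(
        fake_logit, torch.zeros_like(fake_logit)
    )
    loss.backward()

    # Shift each pixel by at most epsilon in the direction that lowers
    # the loss, then clamp back to a valid image. The change is far too
    # small for a human viewer to notice.
    adversarial = frame - epsilon * frame.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

# Applied to every frame of the fake video:
# adv_frames = [fgsm_perturb_frame(detector, f) for f in frames]
```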
The proposed solution seems fairly straightforward: train the detectors better, using a process known as adversarial training, in which an adaptive adversary keeps generating new deepfakes that can bypass the detector while it's being trained. This would allow the detector to keep improving at spotting tricky deepfakes without flagging genuine footage as fake.
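Again purely as an illustration, here's what one step of such a loop might look like, reusing the same simple single-step adversary from the sketch above. The paper's adaptive adversary is considerably stronger, and every name here is hypothetical:

```python
import torch
import torch.nn.functional as F

def adversarial_training_step(detector, optimizer, frames, labels, epsilon=0.01):
    """One training step on a mix of clean and adversarial frames.

    Assumes `frames` is an (N, 3, H, W) batch in [0, 1] and `labels`
    is (N, 1) with 1.0 = fake, 0.0 = real. Hypothetical names throughout.
    """
    # The "adversary": perturb the batch to *maximise* the detector's
    # loss, i.e. push fakes toward looking real (and vice versa).
    adv = frames.clone().detach().requires_grad_(True)
    adv_loss = F.binary_cross_entropy_with_logits(detector(adv), labels)
    adv_loss.backward()
    adv = (adv + epsilon * adv.grad.sign()).clamp(0.0, 1.0).detach()

    # The "defender": train on clean and perturbed frames together, so
    # the detector learns to catch doctored deepfakes without starting
    # to flag genuine footage as fake.
    optimizer.zero_grad()
    inputs = torch.cat([frames, adv])
    targets = torch.cat([labels, labels])
    loss = F.binary_cross_entropy_with_logits(detector(inputs), targets)
    loss.backward()
    optimizer.step()
    return loss.item()
```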
You can read more about the research on the UC San Diego News Center website.
[via Engadget]