In an attempt to stop fake news from spreading, Twitter will soon begin labeling deceptive content. This includes “deceptively edited” photos, deepfake videos, and manipulated content that could cause “harm to physical safety, widespread civil unrest, voter suppression or privacy risks.”
According to Reuters, Twitter’s efforts come just ahead of the 2020 US presidential election, which is expected to see a wave of manipulated and deepfake content intended to deceive the public and influence voters. Ultimately, such content could change the outcome of the election. Deepfake videos are probably the biggest problem because they can make it look like someone said something they never did.
Twitter’s new policy will apply a “false” warning label to fake content. This includes “any photos or videos that have been ‘significantly and deceptively altered or fabricated,’” Reuters writes. Twitter’s head of site integrity, Yoel Roth, said: “Our focus under this policy is to look at the outcome, not how it was achieved.” He added that “the content could be removed if the text in the tweet or other contextual signals suggested it was likely to cause harm.”
The same source writes that social media platforms have faced a lot of pressure to tackle fake and deceptive content, and each seems to be handling it differently. For example, Facebook said that it would remove deepfakes and some other manipulated videos, but not all of them: “satirical content” will reportedly stay on the platform, along with videos edited “solely to omit or change the order of words.” The latter could be especially tricky, in my opinion. Instagram labels fake content but removes it only from the Explore and hashtag pages, and its labels sometimes catch creative and artistic photo manipulations, too.
Unfortunately, it’s becoming harder and harder to spot deepfakes, so I wonder how Twitter and other social networks will manage to catch the ones that are truly difficult to identify. I guess we’ll have to wait and see how it goes.