Facebook recently found itself in the middle of yet another scandal. This time, it’s not about a data leak, but about a racist label users were seeing under a video featuring Black men. Facebook’s AI tagged it as “Primates,” causing a fierce backlash.
According to the New York Times, users who watched a 27 June video posted by the Daily Mail received an automated prompt with the racist message, asking them whether they wanted to “keep seeing videos about Primates.” The video reportedly showed “Black men in altercations with white civilians and police officers,” and had absolutely nothing to do with primates.
Facebook AI asks users if they want to “keep seeing videos about Primates" when watching "clips of Black men in altercations with white civilians and police officers." https://t.co/TsMSJ4FF04 pic.twitter.com/aKeAeyLhbT
— Richard Hanania (@RichardHanania) September 4, 2021
When Facebook figured out what had happened, it simply disabled the topic recommendation feature. However, it seems to have taken the company a while, since the feature was only disabled a few days ago. Speaking to The Verge, a Facebook spokesperson said that it was “clearly an unacceptable error.” As if we didn’t know that already. The company is reportedly “investigating the cause to prevent the behavior from happening again.”
“As we have said, while we have made improvements to our AI we know it’s not perfect and we have more progress to make. We apologize to anyone who may have seen these offensive recommendations.”
Speaking of “not perfect” and “more progress to make,” remember when Facebook’s AI thought that a photo of onions was porn? That’s my favorite story ever, along with the sand dunes mistaken for nudes. But those blunders were all fun, even kinda cute – unlike calling Black people “primates.”
I know it’s artificial intelligence, and it can’t be flawless. Perhaps you remember Google’s AI tagging Black people as “gorillas”? But that was back in 2015, and one would expect this kind of AI to improve over the course of six years.
[via The Verge]