ImageNet, one of the largest publicly accessible online image databases, is removing 600,000 images from its system. That’s roughly half of the 1.5 million images in its “person” categories. The decision came after an art project, ImageNet Roulette, exposed the racial and gender bias that underlies AI systems trained on ImageNet’s data.
ImageNet Roulette is an initiative by artist Trevor Paglen and AI researcher Kate Crawford. Their project allowed people to upload photos of themselves or submit photos via a URL, which would then be labeled by an AI trained on ImageNet data. It was launched last week as part of an exhibition at the Fondazione Prada in Milan titled “Training Humans.”
Sometimes the results were amusing. But other times, they exposed the bias inherent in an AI system trained on human-applied labels. As Artsy notes, a woman might be labeled a “slut,” and some African American users reported being labeled a “wrongdoer.”
“No matter what kind of image I upload, ImageNet Roulette, which categorizes people based on an AI that knows 2500 tags, only sees me as Black, Black African, Negroid or Negro. Some of the other possible tags, for example, are ‘Doctor,’ ‘Parent’ or ‘Handsome.’”
— Lil Uzi Hurt (@lostblackboy), September 18, 2019
“ImageNet Roulette regularly returns racist, misogynistic and cruel results,” the website reads. “That is because of the underlying data set it is drawing on, which is ImageNet’s ‘Person’ categories.” The project was designed to expose some of the underlying problems with how AI classifies people. And according to its creators, ImageNet Roulette has achieved its goals.
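For the technically curious, here is roughly the mechanism at work. This is not ImageNet Roulette’s actual implementation, just a minimal sketch of ImageNet-style classification using torchvision’s pretrained ResNet-50, which covers the standard 1,000-class ILSVRC subset rather than the “person” categories Roulette drew on; the file name portrait.jpg is a placeholder. The key point holds either way: the model can only ever answer with labels that humans put into the dataset.

```python
import torch
from PIL import Image
from torchvision import models

# Load a ResNet-50 pretrained on the 1,000-class ImageNet (ILSVRC) subset.
weights = models.ResNet50_Weights.IMAGENET1K_V2
model = models.resnet50(weights=weights)
model.eval()

# The weights ship with their matching preprocessing (resize, crop, normalize).
preprocess = weights.transforms()

# Hypothetical input file; any portrait photo would do.
img = Image.open("portrait.jpg").convert("RGB")
batch = preprocess(img).unsqueeze(0)  # add a batch dimension

with torch.no_grad():
    probs = model(batch).softmax(dim=1)

# Print the five labels the model considers most likely.
top5 = probs.topk(5)
labels = weights.meta["categories"]
for p, idx in zip(top5.values[0], top5.indices[0]):
    print(f"{labels[idx]}: {p:.2%}")
```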
Paglen told The Art Newspaper that their exhibition “shows how these images are part of a long tradition of capturing people’s images without their consent, in order to classify, segment, and often stereotype them in ways that evoke colonial projects of the past.”
Starting Friday, 27 September, ImageNet Roulette will no longer be available online, since it has made its point. But I got to play with it a bit, and I got some… interesting results.
I tested it with photos of some of the actors and actresses who first crossed my mind: Salma Hayek, Monica Bellucci, Charlize Theron, Octavia Spencer, Sigourney Weaver, Morgan Freeman, Adrien Brody, and Jared Leto.
No matter how many photos of Octavia Spencer and Morgan Freeman I entered, the result would always be “Black African” or “black man/black woman.” Entering photos of the actresses, all of them beautiful, talented and successful women, would often return a result simply saying “face” or “woman.” Bellucci was once identified as “smasher, stunner, knockout, beauty, ravisher, sweetheart, peach, lulu.” Okay, she is gorgeous, but she’s so much more than that. Salma Hayek got “cat sitter/critter sitter” in one attempt, and Charlize Theron was described as “eccentric.” Uploading photos of white male actors, on the other hand, gave me results such as “sociologist,” “rich man,” and “successful.”
I also tried a few photos of painter Frida Kahlo and her partner, painter Diego Rivera. She was identified as either “suffragette” or simply “face,” while her significant other was recognized as a “sociologist” or “physicist.”
And finally, I uploaded a few photos of myself. Most of the time, I was just “woman,” “face,” or, my favorite, “whiteface.” I mean, what the hell? I only got a profession suggestion once, and it was “trumpeter, cornetist.” I also uploaded a photo I took of myself while goofing around with makeup and a toy dragon, pretending to be Daenerys Targaryen. I was labeled an “orphan.”
It wasn’t so long ago that we saw an example of a “racist” image recognition system, and it wasn’t the only one. Even though artificial intelligence improves over the years, it can still turn out to be pretty stupid or offensive in some situations. My guess is that it will take some time before people stop being labeled in offensive ways. And by this, I’m not referring only to AI. But that’s a whole other story.
Update: the ImageNet team contacted us with some clarification. They say that they didn’t delete the images because of ImageNet Roulette’s results. They began this work in January, two months before Roulette’s release in March, and in August they submitted a paper as part of these efforts, which is summarized here.
The ImageNet team also received an NSF grant last year, a further illustration of their long-standing work to address bias in AI systems.
[via Artsy, The Art Newspaper; lead image credits: John Mathew Smith (left), Georges Biard/Wikimedia Commons (right)]