
Racial bias in AI models is something we’ve discussed before. Sadly, the problem seems to persist no matter how advanced AI programs get. A recent example from an Asian-American student illustrates the point. She uploaded her picture to an AI portrait generator, asking it to make her look more professional. Instead of formalizing her attire or making a similar tweak, the AI turned her into a Caucasian woman with blue eyes.
Rona Wang, a 24-year-old Asian-American MIT graduate, has been experimenting with AI portrait generators recently. She gave Playground AI a shot, asking it to create a more “professional” LinkedIn profile photo. However, the AI model threw her an unpleasant surprise. It significantly altered her ethnic appearance, giving her a fairer complexion, dark blonde hair, and blue eyes. Rona tweeted the result and sparked loads of reactions.
As Twitter users argue, the algorithm seemingly misinterpreted the “professional” prompt, transforming Rona from an Asian woman into a white woman. “I was like, ‘Wow, does this thing think I should become white to become more professional?'” Rona told Boston.com. She admits that she laughed at the result at first. However, it just confirmed a problem we’ve all seen in AI tools multiple times: racial bias.
Are AI models racist?
“It’s kind of offensive,” Rona said, commenting on whether AI models are racist. “But at the same time I don’t want to jump to conclusions that this AI must be racist.”
The founder of Playground AI, Suhail Doshi, responded directly to Rona’s tweet. He said that “the models aren’t instructable like that so it’ll pick any generic thing based on the prompt,” and that, unfortunately, “they’re not smart enough.” He added that he’d be happy to help Rona get the result she wanted, but “it takes a bit more effort than something like ChatGPT.”
“Fwiw, we’re quite displeased with this and hope to solve it,” he concluded.
The comments on Rona’s tweet vary, ranging from accusing AI of racism to trying to explain what stands behind this transformation. There’s an interesting discussion in this thread. One user wrote that “the problem here is not in the model but the overall perception of AI being smart.” They added that AI “doesn’t understand nuances and sometimes even the context of an image.”
“You think this doesn’t have to do with bias even though only, what, 10% of the world has blue eyes? *and* the original photo had brown eyes?” another user replied. “To your point, I do think it’s a bad model, but… really? no bias at all?”
I only have a basic, superficial understanding of how AI models work, so I honestly can’t go into too much depth. But the fact remains that this isn’t the first example of racial bias in AI, and it probably won’t be the last. The critical question is whether, and how, the AI industry will resolve the issue, because it’s definitely there and we can’t just ignore it.
[via Next Shark]