Google, it seems, is not the most politically correct mind on the planet. As a recent incident with the Google Photos app illustrates, its artificial intelligence engine is still learning, and making giant mistakes along the way.
Computer programmer and hobbyist photographer Jacky Alciné recently tweeted, “Google Photos, y’all f@#ked up. My friend’s not a gorilla,” along with a screenshot. Jacky had uploaded a photo of himself and a friend to Google Photos, and the app’s automatic tagging feature had labeled them “Gorillas.”
After the mishap, Google raced to fix the problem. In a statement to the Wall Street Journal, the company said:
“We’re appalled and genuinely sorry that this happened. There is still clearly a lot of work to do with automatic image labeling, and we’re looking at how we can prevent these types of mistakes from happening in the future.”
And they should be.
The problem with artificial intelligence is that it is just that: artificial. At the end of the day, it’s simply a bunch of ones and zeros coming together to output data. Like human cognition, it learns from what it sees, but it can’t explore beyond its current reach (i.e., what it’s shown on the internet). It can’t walk out into the real world and discover new bits of information the way a child steps beyond their front door.
Artificial intelligence expert Vivienne Ming believes this incident is the result of a biased society. The machines and algorithms are trained and tested on images from the internet, and because those images are overwhelmingly of white people, she argues, the systems become far less reliable at recognizing non-white faces.
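To see why a skewed training set matters, here’s a minimal, hypothetical sketch in Python using scikit-learn (this is not Google’s system; the synthetic data and the 95/5 split are stand-ins for the kind of imbalance Ming describes). A simple classifier trained on data dominated by one class recognizes that class far more reliably than the one it rarely saw:

```python
# A minimal sketch (not Google's pipeline) of how class imbalance in
# training data degrades accuracy on the underrepresented class.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import recall_score

# Synthetic two-class dataset: class 0 is ~95% of examples, class 1 ~5%.
# The skew is a hypothetical stand-in for an imbalanced photo corpus.
X, y = make_classification(
    n_samples=10_000,
    n_features=20,
    n_informative=10,
    weights=[0.95, 0.05],
    random_state=0,
)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
pred = clf.predict(X_test)

# Per-class recall: the majority class is recognized far more reliably
# than the minority class the model rarely encountered during training.
for label in (0, 1):
    print(f"class {label} recall: {recall_score(y_test, pred, pos_label=label):.2f}")
```

Run it and the recall for the majority class lands near 1.0 while the minority class trails well behind, which is the basic mechanism behind these misclassifications: the model simply hasn’t seen enough examples to get it right.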
As we’ve seen with Google’s other products, the algorithms adapt and grow more accurate over time as users correct bad data. It’s just unfortunate that this blunder had to happen along the way.
[via The Wall Street Journal]