Twitter image-cropping algorithm raises far more red flags with its ‘Biased AI’: Not just skin color, but multiple other groups marginalized unintentionally?

Almost every AI-trained algorithm suffers from implicit bias. Pic credit: Free-Photos/Pixabay

Twitter faced a major backlash over its automated image-cropping algorithm, which appeared to skew in favor of white people. However, new research suggests the Artificial Intelligence (AI) behind it marginalized many more groups of people.

It seems there was a lot more wrong with the algorithm Twitter deployed to automatically focus on people’s faces. Apart from preferring white people over Black people, the AI carried major implicit biases against several other groups, such as older people, people with disabilities, and people of particular ethnicities.

Twitter’s now-defunct AI-trained algorithm was every tech company’s modern-day nightmare:

Every business, be it offline or online, dreads being branded “biased or prejudiced”. Companies try exceptionally hard to ensure consumers consider them “inclusive” towards all.

Customers routinely boycott organizations accused of marginalizing certain sections of society. Needless to say, tech companies are no exception.

Twitter introduced a tool last year that automatically cropped image previews. However, the algorithm appeared to discriminate against Black people.

Several social media users quickly went to work to check whether the image-cropping algorithm was biased. A consensus soon emerged that the algorithm favored white people.

Users would deliberately tweet photos that included both a white person and a Black person. The algorithm would almost always automatically highlight the white person.
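This informal test can be made systematic. Below is a minimal sketch in Python, assuming a cropper that centers on the peak of a per-pixel saliency map (a common design for such tools, and broadly how Twitter described its own); the saliency model itself and the helper name `crop_focus_side` are stand-ins for illustration, not Twitter’s production code:

```python
import numpy as np

def crop_focus_side(saliency: np.ndarray) -> str:
    """Report which half of a side-by-side photo a saliency-centered
    crop would highlight.

    `saliency` is a 2-D array of per-pixel importance scores produced
    by some saliency model; the model itself is assumed here, since
    Twitter's actual pipeline is not reproduced in this sketch.
    """
    # Saliency-based croppers commonly center the crop on the most
    # salient pixel, so locate the global maximum.
    _, max_col = np.unravel_index(np.argmax(saliency), saliency.shape)
    return "left" if max_col < saliency.shape[1] // 2 else "right"

# Toy example: the saliency peak sits in the right half of the image,
# so the crop would highlight the person standing on the right.
toy = np.zeros((100, 200))
toy[50, 150] = 1.0
print(crop_focus_side(toy))  # -> "right"
```

Running such a check over many paired photos, as users informally did, turns individual anecdotes into a measurable skew.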

Twitter has since taken down the algorithm, but the incident exposed the murky underworld of the AI training models that “teach” algorithms.

In the tech world, experts call it “Biased AI”. Computer programs that learn from users and their behavioral patterns almost invariably pick up some kind of unintended bias, said Parham Aarabi, a professor at the University of Toronto and director of its Applied AI Group.

The Applied AI Group studies and consults on biases in artificial intelligence. The team observed: “Almost every major AI system we’ve tested for major tech companies, we find significant biases.”

“Biased AI is one of those things that no one’s really fully tested, and when you do, just like Twitter’s finding, you’ll find major biases exist across the board.”
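To make that concrete: one generic first check in such an audit is to compare per-group selection rates, a demographic-parity test. The sketch below illustrates that metric under assumed labels; it is not the Applied AI Group’s actual methodology:

```python
from collections import Counter

def selection_rates(outcomes: list[tuple[str, bool]]) -> dict[str, float]:
    """Fraction of people in each group that the system selected.

    `outcomes` pairs a demographic label with whether the algorithm
    highlighted that person; large gaps between groups are a red flag.
    """
    totals: Counter = Counter()
    selected: Counter = Counter()
    for group, was_selected in outcomes:
        totals[group] += 1
        selected[group] += int(was_selected)
    return {g: selected[g] / totals[g] for g in totals}

# Toy audit: group_a is highlighted twice as often as group_b.
rates = selection_rates([
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
])
print(rates)  # -> {'group_a': 0.666..., 'group_b': 0.333...}
```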

Besides being “racist”, Twitter’s image-cropping algorithm was also ageist, ableist, and more?

Upon realizing that something was amiss, Twitter organized a contest to hunt for such implicit biases. To be clear, the micro-blogging network never intentionally trained the algorithm to marginalize certain sections of society.

Parham Aarabi’s team won second place. It found that the algorithm was biased against multiple groups of people: according to some reports, apart from being racist, the algorithm was also ageist and ableist.

For example, the algorithm tended to crop out people in wheelchairs and people with greying or white hair. It would also tend to ignore people who wore head coverings.
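Findings like these are typically demonstrated through counterfactual probing: edit a single attribute of a photo (greying hair, a head covering) while holding everything else fixed, then compare the model’s saliency scores before and after. A minimal sketch of that comparison follows, with `saliency_score` as a hypothetical stand-in for the real model, which is a trained neural network:

```python
import numpy as np

def saliency_score(image: np.ndarray) -> float:
    # Hypothetical stand-in for the cropping model's peak saliency on
    # an image; swapped in so the sketch runs end to end.
    return float(image.mean())

def attribute_effect(original: np.ndarray, edited: np.ndarray) -> float:
    """Change in saliency caused by editing one attribute while holding
    the rest of the image fixed. Consistently negative values over many
    pairs suggest the model down-ranks people with that attribute."""
    return saliency_score(edited) - saliency_score(original)

# Toy pair: the edited image scores lower, meaning the model would be
# less likely to center the crop on that person after the edit.
rng = np.random.default_rng(0)
original = rng.random((64, 64))
edited = original * 0.8  # stand-in for an attribute edit
print(attribute_effect(original, edited))  # negative -> down-ranked
```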

The team that won first place in Twitter’s contest concluded that the algorithm preferred slimmer, younger faces with smoother, lighter skin. Notably, Twitter has a team devoted to machine-learning ethics, and the company said it wanted to “set a precedent for proactive and collective identification of algorithmic harms.”
