Twitter faced a significant backlash over its automated image-cropping algorithm, which appeared to skew in favor of white people. New research suggests the Artificial Intelligence (AI) behind it marginalizes many more groups.
It turns out there was far more wrong with the algorithm Twitter deployed to automatically focus on people's faces. Apart from preferring white faces over Black ones, the AI carried implicit bias against several other groups, including older people, people with disabilities, and members of particular ethnicities.
Twitter's now-defunct, AI-trained cropping algorithm was every tech company's modern-day nightmare:
Every business, be it offline or online, dreads being branded "biased" or "prejudiced". Companies try exceptionally hard to ensure consumers consider them "inclusive" towards all.
Customers routinely boycott organizations accused of marginalizing certain sections of society. Needless to say, tech companies are no exception.
An unusual contest found that a Twitter algorithm that identifies the most important areas of an image discriminates by age and weight, and favors English and other Western languages. https://t.co/Tz8zPbLtW5
— WIRED (@WIRED) August 9, 2021
Twitter introduced the tool last year to automatically crop image previews. However, the algorithm appeared to discriminate against Black people.
Several social media users quickly set about testing whether the image-cropping algorithm was biased, and the common consensus was that it favored white people.
Beauty filters can fool Twitter's AI, @hiddenmarkov found in a Twitter contest to spot algorithm bias. Twitter's image cropping tool is more excited by faces that look slim and young, with lighter and warmer skin tones. tip @Techmeme https://t.co/u9B6nWyWBW
— Stephen Shankland (@stshank) August 9, 2021
Users would deliberately tweet photos that included both a white and a Black person. However, the algorithm would almost always automatically highlight the white person.
Twitter has taken down the algorithm, but the incident exposed the murky underworld of the AI training models that "teach" such algorithms.
A Twitter image-cropping algorithm that went viral last year when users discovered it preferred white people to Black people was also coded with implicit bias against a number of other groups, researchers have found. https://t.co/ZelPzgmY62
— NBC News (@NBCNews) August 9, 2021
In the tech world, experts call it "Biased AI". Computer programs that learn from users and their behavioral patterns almost invariably pick up some kind of unintended bias, according to Parham Aarabi, a professor at the University of Toronto and director of its Applied AI Group.
The Applied AI Group studies and consults on biases in artificial intelligence. The team observed: “Almost every major AI system we’ve tested for major tech companies, we find significant biases.”
“Biased AI is one of those things that no one’s really fully tested, and when you do, just like Twitter’s finding, you’ll find major biases exist across the board.”
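Twitter's cropper reportedly worked by predicting a saliency map, an estimate of which pixels a viewer's eye is drawn to, and then cropping around the most salient region. The bias therefore lives in the learned saliency model, not in the cropping geometry itself. The following is a toy sketch of that cropping step (not Twitter's actual code; the function and array names are illustrative):

```python
import numpy as np

def crop_around_saliency_peak(image, saliency, crop_h, crop_w):
    """Center a crop window on the most salient pixel, clamped to the image bounds.

    `saliency` is a 2D array the same height/width as `image`, as would be
    produced by a trained saliency model. If that model systematically scores
    some faces higher than others, this purely mechanical step inherits the bias.
    """
    h, w = saliency.shape
    # Locate the single most salient pixel.
    peak_y, peak_x = np.unravel_index(np.argmax(saliency), saliency.shape)
    # Clamp the window so it stays inside the image.
    top = min(max(peak_y - crop_h // 2, 0), h - crop_h)
    left = min(max(peak_x - crop_w // 2, 0), w - crop_w)
    return image[top:top + crop_h, left:left + crop_w]

# Toy example: a 10x10 "image" whose saliency map peaks near one corner.
image = np.arange(100).reshape(10, 10)
saliency = np.zeros((10, 10))
saliency[2, 7] = 1.0  # pretend the model fixated here
crop = crop_around_saliency_peak(image, saliency, 4, 4)
```

Because the crop simply follows the peak, any group the saliency model under-scores, whether by skin tone, age, or head covering, is silently cropped out.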
Besides being "racist", Twitter's image-cropping algorithm was also ageist, ableist, and more.
After realizing something was amiss, Twitter organized a contest to hunt for such implicit biases. The micro-blogging network clearly never intended to train the algorithm to marginalize certain sections of society.
Parham Aarabi's team won second place after finding that the algorithm was biased against multiple groups of people. Some reports claim that, apart from being racist, the algorithm was also ageist and ableist. It even seemed to avoid people who covered their heads.
.@pandeyparul summarizes the issues with Twitter’s Image Cropping algorithm, the findings of their research team, and how they intend to bring more transparency around their existing machine learning (ML) systems https://t.co/sE85IlFhex
— Towards Data Science (@TDataScience) August 4, 2021
For example, the algorithm tended to crop out people in wheelchairs, people with greying or white hair, and people wearing head coverings.
Participants were given access to the code underlying Twitter’s algorithm for image cropping. https://t.co/n8IB0wX0Um
— Emerging Tech Brew ☕ (@etechbrew) August 9, 2021
The team that won first place in Twitter's contest concluded that the algorithm preferred faces that were slimmer and younger, with lighter or warmer skin tones. It is important to note that Twitter has a team devoted to machine learning ethics. Twitter said it wanted to "set a precedent for proactive and collective identification of algorithmic harms."