Last fall, Canadian student Colin Madland noticed that Twitter’s automatic cropping algorithm continually selected his face, not that of his darker-skinned colleague, from photos of the pair to display in tweets. The episode ignited accusations of bias as a flurry of Twitter users published elongated photos to see whether the AI would choose the face of a white person over a Black person, or whether it focused on women’s chests instead of their faces.
At the time, a Twitter spokesperson said assessments of the algorithm before it went live in 2018 found no evidence of race or gender bias. Now, the largest analysis of the AI to date has found the opposite: that Twitter’s algorithm favors white people over Black people. That assessment also found that the AI for predicting the most interesting part of a photo does not focus on women’s bodies over women’s faces.
Previous tests by Twitter and researcher Vinay Prabhu involved a few hundred images or fewer. The analysis released by Twitter research scientists Wednesday is based on 10,000 image pairs of people from different demographic groups to test whom the algorithm favors.
Researchers found bias in how the algorithm handles photos containing people from two different demographic groups. Because the algorithm picks a single face to appear in Twitter timelines, a consistent preference means some groups end up better represented on the platform than others. When researchers fed a picture of a Black man and a white woman into the system, the algorithm chose to display the white woman 64 percent of the time and the Black man only 36 percent of the time, the largest gap for any demographic groups included in the analysis. For images of a white woman and a white man, the algorithm displayed the woman 62 percent of the time; for images of a white woman and a Black woman, it displayed the white woman 57 percent of the time.
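The measurement above amounts to a simple pairwise tally: show the model many image pairs and count how often each group's face is chosen. A minimal sketch of that counting step, using hypothetical labels and made-up outcomes (the actual study used 10,000 pairs and Twitter's internal saliency model):

```python
from collections import Counter

def favoritism_rate(choices):
    """choices: list of group labels the model picked, one per image pair.
    Returns the fraction of pairs in which each group's face was chosen."""
    counts = Counter(choices)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

# Hypothetical outcomes for 10 image pairs; label names are illustrative only.
picked = ["white_woman"] * 6 + ["black_man"] * 4
rates = favoritism_rate(picked)
print(rates)  # {'white_woman': 0.6, 'black_man': 0.4}
```

A gap between the two rates (here 60/40) is what the researchers report as favoritism; a fair algorithm would hover near 50/50 on balanced pairs.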
On May 5, Twitter did away with image cropping for single photos posted using the Twitter smartphone app, an approach Twitter chief design officer Dantley Davis had favored since the algorithm controversy erupted last fall. The change led people to post tall photos and signaled the end of “Open for a surprise” tweets.
The so-called saliency algorithm is still in use on Twitter.com, as well as for cropping multi-image tweets and creating image thumbnails. A Twitter spokesperson says excessively tall or wide photos are now center cropped, and the company plans to end use of the algorithm on the Twitter website. Saliency algorithms are trained on eye-tracking data that records where people’s gaze lands when they view an image.
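The two cropping strategies mentioned above differ in a simple way: a saliency crop searches for the window the model scores as most interesting, while a center crop just takes the middle of the frame. A rough sketch of both, assuming the saliency model's output is given as a 2-D score map (function names and the brute-force search are illustrative, not Twitter's implementation):

```python
import numpy as np

def saliency_crop(saliency, crop_h, crop_w):
    """Return (top, left) of the crop window with the highest summed saliency.
    Brute-force search for clarity; production systems would be smarter."""
    H, W = saliency.shape
    best_score, best_pos = -1.0, (0, 0)
    for top in range(H - crop_h + 1):
        for left in range(W - crop_w + 1):
            score = saliency[top:top + crop_h, left:left + crop_w].sum()
            if score > best_score:
                best_score, best_pos = score, (top, left)
    return best_pos

def center_crop(h, w, crop_h, crop_w):
    """The replacement behavior for overly tall or wide photos: crop the middle."""
    return ((h - crop_h) // 2, (w - crop_w) // 2)
```

The saliency version is where bias can enter: whatever faces the trained model scores highest get kept, while the center crop is content-blind by construction.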
Other sites, including Facebook and Instagram, have used AI-based automated cropping. Facebook did not respond to a request for comment.
Accusations of gender and race bias in computer vision systems are, unfortunately, fairly common. Google recently detailed efforts to improve how Android cameras work for people with dark skin. Last week the group Algorithm Watch found image labeling AI used in an iPhone labeled cartoon depictions of people with dark skin as “animal.” An Apple spokesperson declined to comment.
Regardless of the results of fairness measurements, Twitter researchers say algorithmic decision making can take choice away from users and have far-reaching impact, particularly for marginalized groups of people.
In the newly released study, Twitter researchers said they did not find evidence that the photo cropping algorithm favors women’s bodies over their faces. To determine this, they fed the algorithm 100 randomly chosen images of people identified as women and found that only three crops centered bodies over faces; the researchers attribute those cases to a badge or jersey number on the person’s chest rather than to the chest itself. To conduct the study, researchers used photos from the WikiCeleb dataset; identity traits of people in the photos were taken from Wikidata.