Twitter announced that it would investigate the neural network responsible for generating its photo previews after users of the platform called out an apparent racial bias in its selection of pictures.

The social media platform uses a neural network, an automated system that runs parts of its operations, including the photo preview feature, which reportedly selects white faces more often than Black faces.


Public Experiments on Twitter's Preview Images

The recent Twitter fiasco started when users experimented with how the social media platform chooses the photo to show in its previews - with photos of white people appearing more frequently. User @bascule wrote: "Trying a horrible experiment..." and asked which face the Twitter algorithm would pick: US Senator Mitch McConnell or former president Barack Obama. The first image was a long vertical strip with Senator McConnell at the top and former president Obama at the bottom. The second image featured the same people, but with Obama at the top and McConnell at the bottom.

In both photo previews, Twitter showed Senator McConnell.

The public scrutiny reportedly came after another Twitter user, Colin Madland (@colinmadland), complained that Zoom's facial recognition feature failed to show his Black colleague's face when using a virtual background, even after attempts to fix the lighting. When Madland posted about it on Twitter, the preview showed his white face over his Black colleague's.

Another user, Jordan Simonovski (@_jsimonovski), tried it with cartoon characters, using The Simpsons' Lenny Leonard, who is Caucasian, albeit yellow-skinned, and his best friend Carl Carlson, who is Black. Regardless of the arrangement of their photos, the previews showed Lenny in both instances.

Other experiments with Twitter's photo preview feature included manipulating the images of Carl and Lenny to switch their skin colors, only for the previews to still display Lenny. Another user tried it with black and white dogs, with similar results.


Twitter To Investigate the Matter

The flaw could be partly explained by a 2018 blog post from Twitter's machine learning researchers. In the January 2018 post, the researchers explained that they initially used face detection to decide how and where to crop images, but the approach was constrained by the fact that not all images feature faces. They also wanted to avoid misdetections - failing to see a face that is there, or detecting one where there is none - which they said would lead to "awkwardly cropped" preview images.
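
The same post reportedly describes scoring regions of an image and cropping around the most "salient" area. The snippet below is a minimal, hypothetical sketch of that idea, not Twitter's actual system: dummy_saliency is a stand-in that scores pixels by brightness contrast, and choose_crop simply picks the preview window with the highest total score.

```python
import numpy as np

def dummy_saliency(image: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for a trained saliency network: scores how
    'interesting' each pixel is. Here we use deviation from mean brightness
    so the example runs without any model weights."""
    gray = image.mean(axis=2)
    return np.abs(gray - gray.mean())

def choose_crop(image: np.ndarray, crop_h: int, crop_w: int):
    """Slide a crop window over the saliency map and return the (top, left)
    corner whose window contains the most total saliency, the basic idea
    behind saliency-driven auto-cropping."""
    sal = dummy_saliency(image)
    # Integral image: rectangle sums in O(1) per candidate window.
    integral = np.pad(sal, ((1, 0), (1, 0))).cumsum(axis=0).cumsum(axis=1)
    best_score, best_pos = -1.0, (0, 0)
    for top in range(image.shape[0] - crop_h + 1):
        for left in range(image.shape[1] - crop_w + 1):
            score = (integral[top + crop_h, left + crop_w]
                     - integral[top, left + crop_w]
                     - integral[top + crop_h, left]
                     + integral[top, left])
            if score > best_score:
                best_score, best_pos = score, (top, left)
    return best_pos

# Example: crop a tall 400x200 image down to a 160x200 preview window.
img = np.random.randint(0, 256, size=(400, 200, 3), dtype=np.uint8)
top, left = choose_crop(img, crop_h=160, crop_w=200)
preview = img[top:top + 160, left:left + 200]
print("preview window starts at", (top, left))
```

In a real pipeline, the placeholder scoring function would be replaced by a trained saliency model, which is where biases in training data could influence which faces end up in the cropped preview.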

Dantley Davis, Twitter's chief design officer, tweeted that the social media company is now investigating the neural network in relation to the issue. He also shared the results of his own experiments, calling it an "isolated example" and stressing the need to examine more variables. However, another user refuted his results, comparing the two images across varying combinations of suits and background colors.

Parag Agrawal, Twitter's Chief Technology Officer (CTO), retweeted a thread from machine learning scientist Vinay Prabhu. Agrawal noted that "this is a very important question," adding his appreciation for the public, open, and rigorous test.

Check out more news and information on Social Media on Science Times.