Press "Enter" to skip to content

Twitter To Investigate Racial Bias In Image Preview

Twitter’s image previews have been seen to be racially biased. Many users suspect the behavior is intentional, but even if it is not, the first look a Twitter user gets at a tweet might be an unintentionally racially biased one.

The issue popped up over the weekend, when Twitter users posted several examples showing that, in an image featuring both a photo of a Black person and a photo of a white person, Twitter’s preview in the timeline more frequently displayed the white person.

The public tests got Twitter’s attention – and now the company is apparently taking action. Twitter responded on Sunday, saying it would investigate whether the neural network that selects which part of an image to show in a photo preview favors showing the faces of white people over those of Black people.

One user found that Twitter seemed to favor showing Mitch McConnell’s face over Barack Obama’s, and others noticed that when two photos – one of a Black face, the other of a white one – were in the same post, Twitter’s picture-cropping algorithm often showed only the white face on mobile.

Twitter said it had tested for racial and gender bias during the algorithm’s development, but added: “It’s clear that we’ve got more analysis to do.” Twitter’s chief technology officer, Parag Agrawal, tweeted: “We did an analysis on our model when we shipped it – but [it] needs continuous improvement. Love this public, open, and rigorous test – and eager to learn from this.”

Facial hair

The latest controversy began when university manager Colin Madland, from Vancouver, was troubleshooting why a colleague’s head kept vanishing when using the videoconferencing app Zoom. The software was apparently misidentifying the Black man’s head as part of the background and removing it.

But when Mr. Madland posted about the topic on Twitter, he found his face – and not his colleague’s – was consistently chosen as the preview on mobile apps, even if he flipped the order of the images. His discovery prompted a range of other experiments by users, which, for example, suggested:

- White US Senate majority leader Mitch McConnell’s face was preferred to Black former US President Barack Obama’s.
- A stock photo of a white man in a suit was preferred to one in which the man was Black.

Twitter’s chief design officer, Dantley Davis, found editing out Mr. Madland’s facial hair and glasses seemed to correct the problem – “because of the contrast with his skin”.

“Our team did test for bias before shipping the model and did not find evidence of racial or gender bias in our testing,” said Liz Kelly, a member of Twitter’s communications team. “But it’s clear from these examples that we’ve got more analysis to do. We’re looking into this and will continue to share what we learn and what actions we take.”

Twitter’s Chief Design Officer Dantley Davis and Chief Technology Officer Parag Agrawal also chimed in on Twitter, saying they’re “investigating” the neural network.

The Twitter Neural Network’s Racial Bias

The conversation started when one Twitter user posted about racial bias in Zoom’s face detection. He noticed that a side-by-side image of him (a white man) and his Black colleague repeatedly showed his face in previews.

After multiple users ran their own tests, one user even showed that the favoring of lighter faces held for characters from The Simpsons.

Twitter’s promise to investigate is essential, but users should view these analyses with caution. It is problematic to claim instances of bias from a handful of examples. To really assess bias, researchers need a large sample size with multiple examples under a variety of circumstances, as in the sketch below.
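As a concrete illustration of what such a test might look like, here is a minimal sketch in Python. The helpers make_pair (which builds the side-by-side composite) and preview_shows (which runs a cropping model and reports which face the preview surfaced) are hypothetical stand-ins, not a real Twitter API; a two-sided binomial test then asks whether the observed split could plausibly have come from an unbiased cropper.

```python
# A minimal sketch of a large-sample bias test. `make_pair` and
# `preview_shows` are hypothetical stand-ins supplied by the caller:
# the first builds the side-by-side composite image, the second runs
# the cropping model and reports which face the preview surfaced.
import math
import random

def binomial_p_value(k: int, n: int, p: float = 0.5) -> float:
    """Two-sided exact binomial test: the probability of a result at
    least as far from a fair 50/50 split as k out of n, if the cropper
    actually had no preference."""
    mean = n * p
    tail = sum(math.comb(n, i) * p**i * (1 - p) ** (n - i)
               for i in range(n + 1)
               if abs(i - mean) >= abs(k - mean))
    return min(tail, 1.0)

def run_bias_trial(pairs, make_pair, preview_shows, trials_per_pair=10):
    """Count how often the preview picks the white face across many
    image pairs, randomizing left/right order on every trial so that
    position alone cannot explain the result."""
    white_picks = total = 0
    for white_img, black_img in pairs:
        for _ in range(trials_per_pair):
            if random.random() < 0.5:  # randomize which face goes first
                composite = make_pair(black_img, white_img)
            else:
                composite = make_pair(white_img, black_img)
            if preview_shows(composite) == "white":
                white_picks += 1
            total += 1
    return white_picks, total, binomial_p_value(white_picks, total)
```

With, say, 40 pairs and ten trials each, a p-value well below 0.05 would indicate a preference that a handful of viral screenshots cannot establish on their own.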

As Mashable reported, anything else amounts to making claims of bias by anecdote – something conservatives do to claim anti-conservative bias on social media. These sorts of arguments can be harmful because people can usually find one or two examples of just about anything to prove a point, which undermines the authority of genuinely rigorous analysis.

That doesn’t mean the preview question is not worth looking into, Mashable added, as this could be an example of algorithmic bias: when automated systems reflect the biases of their human makers, or make decisions that have biased implications.

Twitter Blog Post On Neural Network Preview Decisions

In 2018, Twitter published a blog post explaining how it used a neural network to make photo-preview decisions. One of the factors that leads the system to select a part of an image is a higher contrast level, which could account for why the system appears to favor white faces. The decision to use contrast as a determining factor might not be intentionally racist, but more frequently displaying white faces than Black ones is a biased result.
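Twitter’s production cropper is a trained saliency network, but the contrast intuition from the blog post can be illustrated with a toy sketch. In the hypothetical crop_preview below (assuming Pillow and NumPy are installed), saliency is approximated as local contrast – the standard deviation of pixel intensities in each tile – and the preview is centered on the highest-contrast tile. This is an illustration of the heuristic, not Twitter’s actual model.

```python
# A toy stand-in for a saliency-based cropper: approximate "saliency"
# as local contrast and center the preview on the most salient tile.
# Sizes and function names are illustrative, not Twitter's API.
import numpy as np
from PIL import Image

WINDOW = 16  # tile size in pixels for the local-contrast map

def contrast_saliency(gray: np.ndarray) -> np.ndarray:
    """Local contrast per tile: the standard deviation of intensities
    inside each WINDOW x WINDOW tile of the grayscale image."""
    rows, cols = gray.shape[0] // WINDOW, gray.shape[1] // WINDOW
    sal = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            tile = gray[i * WINDOW:(i + 1) * WINDOW,
                        j * WINDOW:(j + 1) * WINDOW]
            sal[i, j] = tile.std()
    return sal

def crop_preview(img: Image.Image, crop_w: int = 600, crop_h: int = 335) -> Image.Image:
    """Return a fixed-size preview crop centered on the tile with the
    highest local contrast, clamped to the image bounds."""
    gray = np.asarray(img.convert("L"), dtype=np.float32)
    sal = contrast_saliency(gray)
    i, j = np.unravel_index(np.argmax(sal), sal.shape)
    cy, cx = (i + 0.5) * WINDOW, (j + 0.5) * WINDOW  # tile center, pixels
    left = int(np.clip(cx - crop_w / 2, 0, max(img.width - crop_w, 0)))
    top = int(np.clip(cy - crop_h / 2, 0, max(img.height - crop_h, 0)))
    return img.crop((left, top, left + crop_w, top + crop_h))

# Usage: crop_preview(Image.open("composite.jpg")).save("preview.jpg")
```

Because a pale face against a dark suit or background often produces the strongest local contrast, even a crude heuristic like this could reproduce the kind of skew users observed – which is exactly why a proxy like contrast needs to be audited for biased outcomes.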

There’s still a question of whether these anecdotal examples reflect a systemic problem. But responding to Twitter sleuths with gratitude and action is a good place to start no matter what. For further reading, click on either of the links below:

https://www.google.com/amp/s/mashable.com/article/twitter-photo-preview-algorithmic-racial-bias.amp

https://www.bbc.co.uk/news/amp/technology-54234822
