A video on Facebook titled “White man calls cops on black men at marina”, published by UK tabloid the Daily Mail, included an automated prompt from the social network asking users if they would like to “Keep seeing videos about Primates?”
The prompt and the resulting outcry have led the company to begin investigating and to disable the artificial intelligence-powered feature that produced the plainly offensive message.
Published on 27 June 2020, the video features clips of an altercation between a group of black men and white civilians and police officers. It has no connection to primates or any other animals.
“White man calls cops on black men at marina” – the video in question. Image sourced from the New York Times.
The prompt asking users if they would like to see more videos of primates was probably caused by Facebook’s facial recognition technology misidentifying the black men in the video. Such misidentifications happen disturbingly often, and they are a damning indictment of the current generation of facial recognition technology.
Facebook has one of the world’s largest repositories of user-uploaded images on which to train its facial- and object-recognition algorithms. The company uses these algorithms to, amongst other things, tailor specific content for users based on past browsing and viewing habits. Sometimes the algorithm will ask users if they would like to continue seeing posts it believes are related.
According to the New York Times, Google, Amazon and other big tech firms have come under increased scrutiny in recent years for noticeable biases within their AI systems, particularly around the recognition of different races.
Studies have shown that facial recognition technology is biased against people of colour and has trouble identifying them accurately. This has even led to incidents in which black people were wrongfully arrested because of computer errors.
In 2015, Google Photos began mistakenly labelling images of black people as “gorillas.” Google said it was “genuinely sorry” and would work to fix the issue immediately, but Wired later discovered that Google had “fixed” it by simply blocking the words “gorilla,” “chimp,” “chimpanzee,” and “monkey” from searches.
These errors stem from human limitations, especially on the part of the software’s developers. In AI, data rules: if a training set contains far more white faces than black faces, the resulting software will struggle much more to identify black faces.
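To see why an imbalanced training set produces exactly this kind of skew, consider the following toy sketch in Python. It is purely illustrative and in no way Facebook’s actual system: the group labels, sample sizes, and synthetic “embeddings” are assumptions made up for the example. A simple classifier trained on ten times more examples of one group than the other ends up markedly less accurate on the under-represented group.

```python
# A minimal, illustrative sketch (not Facebook's system): a toy classifier
# trained on an imbalanced dataset, showing how under-representation hurts
# accuracy for the minority group. Group names, sample sizes, and the
# "embedding" features are all hypothetical assumptions for this example.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_embeddings(n, centre):
    """Toy stand-in for face embeddings: points around a group-specific centre."""
    return rng.normal(loc=centre, scale=1.0, size=(n, 8))

# Imbalanced training set: 800 examples from group A, only 80 from group B.
X_train = np.vstack([make_embeddings(800, 0.0), make_embeddings(80, 0.5)])
y_train = np.array([0] * 800 + [1] * 80)

# Balanced test set, so per-group accuracy can be compared fairly.
X_test = np.vstack([make_embeddings(500, 0.0), make_embeddings(500, 0.5)])
y_test = np.array([0] * 500 + [1] * 500)

model = LogisticRegression().fit(X_train, y_train)
pred = model.predict(X_test)

for name, label in [("A (majority)", 0), ("B (minority)", 1)]:
    mask = y_test == label
    print(f"group {name}: accuracy {(pred[mask] == label).mean():.2%}")
# Typical output: group A scores far higher than group B, purely because
# group B was under-represented in the training data.
```

In this sketch the imbalance alone drives the accuracy gap; rebalancing the training data (or weighting the minority class, for example via scikit-learn’s class_weight="balanced" option) largely closes it. That is why researchers pay so much attention to the composition of training sets like the one described below.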
The New York Times gives the example of one widely used facial recognition data set which a research study estimated to be more than 75% male and more than 80% white. Any software trained on this data will therefore recognise white men best, and women of colour worst.
Facebook has since apologised for what it called “an unacceptable error”, adding that the company was looking into the recommendation prompt feature to “prevent this from ever happening again.”
“As we have said, while we have made improvements to our A.I., we know it’s not perfect, and we have more progress to make. We apologize to anyone who may have seen these offensive recommendations,” Facebook spokesperson Dani Lever said in a statement.
Kelechi Deca
Kelechi Deca has over two decades of media experience and has travelled to over 77 countries, reporting on multilateral development institutions, international business, trade, travel, culture, and diplomacy. He is also a petrolhead with in-depth knowledge of automobiles and the auto industry.