Ron Alalouff is a journalist specialising in the fire and security markets, and a former editor of websites and magazines in the same fields.
July 20, 2020



Why AI and facial recognition software is under scrutiny for racial and gender bias

In the light of the Black Lives Matter protests, AI and facial recognition vendors and users are taking notice of concerns over racial bias and privacy, reports Ron Alalouff.

The use of artificial intelligence (AI) has come under the spotlight recently, especially over how its algorithms can be biased against people of colour or women. Most recently, in the wake of the Black Lives Matter campaigns following the death of George Floyd in May, tech giants such as Amazon and IBM have suspended or withdrawn facial recognition technologies based on AI algorithms.


The issue of bias in AI is at its most explosive in the United States. Writing on techcrunch.com, Miriam Vogel, President and CEO of Equal AI, argues that while racism has deep historical roots, “AI now plays a role in creating, exacerbating and hiding these disparities behind the facade of a seemingly neutral, scientific machine”. One solution, she says, is a more diverse tech workforce – particularly in terms of gender and race – but recent data show that the proportion of people of colour in the workforces at Google, Microsoft and Apple has remained broadly static for several years.

Vogel sets out how AI can perpetuate and even magnify unconscious bias. For example, information from law enforcement activities that target people of colour is used as data points to create AI systems, so the AI already carries an element of bias when it is deployed. The results of these deployments are then fed back in as new data points, which in turn further skew the AI, resulting in a vicious cycle of self-perpetuating bias.
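To make that loop concrete, here is a minimal, purely illustrative simulation in Python. The neighbourhoods, offence rates and patrol shares are all invented – it models no real police force or vendor system – but it shows how a skewed starting dataset keeps reproducing itself once outputs are recycled as inputs:

```python
import random

random.seed(42)

# Two hypothetical neighbourhoods with an IDENTICAL underlying offence rate.
TRUE_OFFENCE_RATE = {"A": 0.05, "B": 0.05}

# Historical policing concentrated on neighbourhood A, so the system
# starts from a skewed allocation of patrols.
patrol_share = {"A": 0.8, "B": 0.2}

def run_cycle(patrol_share, n_patrols=10_000):
    """One deployment cycle: patrols generate arrest records, and the
    share of recorded arrests becomes the next cycle's patrol allocation."""
    arrests = {"A": 0, "B": 0}
    for hood in ("A", "B"):
        for _ in range(int(n_patrols * patrol_share[hood])):
            if random.random() < TRUE_OFFENCE_RATE[hood]:
                arrests[hood] += 1
    total = arrests["A"] + arrests["B"]
    return {hood: arrests[hood] / total for hood in ("A", "B")}

for cycle in range(5):
    print(f"cycle {cycle}: share of patrols sent to A = {patrol_share['A']:.2f}")
    patrol_share = run_cycle(patrol_share)
```

Even though the two neighbourhoods are identical by construction, the recorded arrest data keeps “confirming” the original skew – exactly the self-perpetuating cycle Vogel describes.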

The use of AI has rocketed over the past few years. According to Gartner, the number of businesses using it grew by 270% between 2015 and 2019. The value of the AI market is expected to grow from $9.5 billion in 2018 to a staggering $118.6 billion by 2025, according to tech researcher Tractica (now part of Omdia). And as far back as 2017, 85% of Americans were already using AI in some form, according to Gallup.

Questions remain around facial recognition

Unconscious bias in AI algorithms is an issue thrown into sharp relief in the field of facial recognition. In the UK, a number of court cases gave the green light to the principle of using facial recognition for legitimate crime-fighting purposes, and the Metropolitan Police made its first live deployment of the technology in east London earlier this year. But an independent report by the London Policing Ethics Panel said the technology should be used only if it could be shown not to generate gender or racial bias in policing operations.

The UK’s Information Commissioner’s Office – together with the Office of the Australian Information Commissioner – has also opened a joint investigation into facial recognition systems supplier Clearview AI. The company’s facial recognition app, which is used by police and law enforcement agencies around the world, allows users to compare suspects with a database of some 3 billion images scraped from publicly available platforms such as Facebook and Google.
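As a general sketch of how one-to-many matching of this kind works – this says nothing about Clearview AI’s actual implementation, and the embeddings, image names and threshold below are invented – a system converts each face into a numeric embedding and ranks database entries by their similarity to a probe image:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two face-embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def best_matches(probe, gallery, threshold=0.9, top_k=3):
    """Rank gallery embeddings by similarity to the probe and keep those
    above a decision threshold; looser thresholds return more candidates
    and therefore more false positives."""
    scored = [(name, cosine_similarity(probe, emb)) for name, emb in gallery]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return [(name, round(score, 3)) for name, score in scored[:top_k]
            if score >= threshold]

# Toy 4-dimensional embeddings; real systems use hundreds of dimensions.
gallery = [("img_001", [0.90, 0.10, 0.30, 0.20]),
           ("img_002", [0.10, 0.80, 0.20, 0.40]),
           ("img_003", [0.85, 0.15, 0.35, 0.10])]
probe = [0.88, 0.12, 0.32, 0.15]
print(best_matches(probe, gallery))   # closest database faces first
```

The decision threshold is where the bias question bites: if similarity scores run systematically higher for some demographic groups, a fixed threshold will produce more false matches for those groups.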

IBM says it’s exiting the facial recognition arena altogether. In a letter to the US Congress last month, CEO Arvind Krishna said the company firmly opposes the use of “any technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and principles of trust and transparency. We believe now is the time to begin a national dialogue on whether and how facial recognition technology should be employed by domestic law enforcement agencies.”

Krishna continued: “[V]endors and users of AI systems have a shared responsibility to ensure that AI is tested for bias, particularly when used in law enforcement, and that such bias testing is audited and reported… National policy also should encourage and advance uses of technology that bring greater transparency and accountability to policing, such as body cameras and modern data analytics techniques.”

Amazon, meanwhile, has suspended police use of its Rekognition software for 12 months, though the details of the suspension remain unclear.

Unconscious bias in technology?

That AI and facial recognition algorithms can be subject to unconscious bias in terms of race, ethnicity and gender has been widely acknowledged and documented. Researchers from MIT and Stanford University found that three commercial facial analysis programmes from major technology companies demonstrated both skin-type and gender biases: while error rates in determining the gender of light-skinned men were never higher than 0.8%, they rose to more than 34% for darker-skinned women.

The most recent of several facial recognition test programmes was reported by the National Institute of Standards and Technology (NIST) in the US in December 2019. Testing nearly 200 facial recognition algorithms from almost 100 developers, it found that false positive results – which showed much larger differentials between groups than false negatives – were highest for African American, American Indian and Asian populations. NIST also recommended that operators research the characteristics of the algorithms they use by testing them, as accuracy varied widely across algorithms.
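NIST’s advice that operators test the algorithms they deploy can be illustrated with a simple per-group error-rate check. The sketch below is not NIST’s methodology, and the group labels and results are invented; it simply computes a false positive rate for each demographic group from labelled comparison results, so that differentials like those reported above become visible:

```python
from collections import defaultdict

def false_positive_rates(results):
    """Compute the false positive rate per demographic group.

    `results` holds (group, is_genuine_pair, algorithm_accepted) tuples;
    a false positive is an impostor pair the algorithm wrongly accepts."""
    impostor_pairs = defaultdict(int)
    false_accepts = defaultdict(int)
    for group, is_genuine, accepted in results:
        if not is_genuine:                  # impostor comparison
            impostor_pairs[group] += 1
            if accepted:                    # wrongly matched
                false_accepts[group] += 1
    return {g: false_accepts[g] / impostor_pairs[g] for g in impostor_pairs}

# Toy labelled comparisons, not real benchmark data.
sample = [
    ("group_1", False, False), ("group_1", False, False),
    ("group_1", False, True),
    ("group_2", False, True),  ("group_2", False, True),
    ("group_2", False, False),
]
for group, fpr in sorted(false_positive_rates(sample).items()):
    print(f"{group}: false positive rate = {fpr:.2f}")
```

A persistent gap between groups on a check like this is precisely the kind of differential NIST flagged, and a reason to retest or replace the algorithm in question.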

A recent report by the Centre for Data Ethics and Innovation (CDEI) – an independent committee set up by the UK government – provides a useful summary of many aspects of facial recognition: how it works, the benefits and risks of using it, and the legal and regulatory framework surrounding the technology in the UK. It draws attention to the variable accuracy of systems depending on subjects’ ethnicity, sex and age – with performance often worse for Black and Asian people – creating additional barriers for under-represented groups in accessing key services.

Ironically, IBM may have got itself into hot water by trying to make its facial datasets for researchers more diverse. The company was criticised over its use of 1 million images obtained from the photo-sharing platform Flickr, without the consent of the photos’ subjects, for its Diversity in Faces dataset. The images were coded to describe subjects’ features such as facial geometry, skin tone, estimated age and gender. And earlier this month, two Illinois residents filed federal lawsuits against Amazon, Google’s parent company Alphabet, and Microsoft, alleging that their images were obtained without their consent, in contravention of the state’s Biometric Information Privacy Act.

There’s no doubt that pressure from campaigners, academics and others is making developers and vendors of facial recognition software think twice about any inherent bias in their systems. But many believe that unless suppliers clean up their act, legislation regulating the use of facial recognition systems is the only way to deal with inherently biased algorithms and the potential misuse of the technology.
