
Amazon’s mugshot AI fail and 5 other facial recognition controversies

Adam Bannister

Editor, IFSEC Global


Adam Bannister is editor of IFSEC Global. A former managing editor at Dynamis Online Media Group, he has been at the helm of the UK's leading fire and security publication since 2014.
August 2, 2018


Amazon’s facial recognition software erroneously flagged 28 members of Congress as having been arrested for crimes during tests run by the American Civil Liberties Union (ACLU).

The retail giant’s Rekognition platform also disproportionately misidentified people of colour in a database of mugshots. People of colour accounted for 11 of the 28 wrongly matched faces – nearly 40% – even though they make up only 20% of the US Congress.

The group cross-referenced a database of 25,000 public arrest photos with public photos of every member of the US House and Senate.
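The disparity figures above can be reproduced with simple arithmetic. A quick sketch, using only the numbers reported in this article:

```python
# Quick check of the ACLU figures reported above.
false_matches = 28        # members of Congress wrongly matched
poc_false_matches = 11    # of whom were people of colour

share_of_errors = poc_false_matches / false_matches
print(f"{share_of_errors:.1%}")  # → 39.3%, i.e. nearly 40% of the errors

# People of colour make up roughly 20% of Congress, per the article,
# so their share of the errors is about double their share of Congress.
poc_share_of_congress = 0.20
print(round(share_of_errors / poc_share_of_congress, 1))  # → 2.0
```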

Leading privacy campaigners have urged CEO Jeff Bezos to suspend sales to government and police agencies.


“Our test reinforces that face surveillance is not safe for government use,” Jacob Snow, a technology and civil liberties attorney at the ACLU Foundation of Northern California, said in a statement.

“Face surveillance will be used to power discriminatory surveillance and policing that targets communities of colour, immigrants, and activists. Once unleashed, that damage can’t be undone.”

The ACLU said it used the default or ‘out of the box’ match settings set by Amazon. Responding in a statement, Amazon said that “when using facial recognition for law enforcement activities, we guide customers to set a threshold of at least 95% or higher.”
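To illustrate what a match threshold does, here is a minimal sketch – not Amazon’s actual API, and the similarity scores are invented for illustration. Raising the threshold from a default-style 80% to the 95% Amazon recommends for law enforcement discards weaker candidate matches:

```python
def filter_matches(candidates, threshold):
    """Keep only candidate matches whose similarity score meets the threshold."""
    return [c for c in candidates if c["similarity"] >= threshold]

# Hypothetical similarity scores for three candidate matches.
candidates = [
    {"name": "candidate A", "similarity": 97.2},
    {"name": "candidate B", "similarity": 88.5},
    {"name": "candidate C", "similarity": 81.0},
]

# At an 80% threshold all three survive; at 95%, only one does.
print(len(filter_matches(candidates, 80)))  # → 3
print(len(filter_matches(candidates, 95)))  # → 1
```

A higher threshold trades fewer false matches for a greater chance of missing genuine ones, which is why the chosen setting matters so much in a policing context.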

Three Congress members wrote to Bezos requesting copies of any internal accuracy or bias assessments Amazon had conducted on Rekognition, and a list of police or intelligence agencies that were using the software or had enquired about doing so.

Although facial recognition technology has made great strides in recent decades, performance remains uneven. There have been several other high-profile controversies over AI-driven platforms – many related to racial or gender bias.

1. Beauty contest AI judge shows racial bias

The first international beauty contest judged by ‘machines’ disproportionately favoured contestants with white skin.

Launched in 2016, Beauty.AI supposedly measured beauty by factors such as facial symmetry and wrinkles. But nearly all of the 44 winners were white, with a handful being Asian and only one having dark skin.

Although most contestants were white, many people of colour submitted photos, with substantial cohorts from India and Africa.

Around 6,000 people from more than 100 countries submitted photos.

2. White males recognised more readily than women or non-whites

Facial recognition software correctly identified white males 99% of the time but darker-skinned women only 65% of the time, according to a study by the M.I.T. Media Lab.

The dataset that trained the software was disproportionately white and male.

Another research study found that one widely used facial-recognition data set was more than 75% male and more than 80% white.

3. Metropolitan Police’s facial recognition technology failed 98% of the time

Facial recognition software used by the Met police had false positives in more than 98% of alerts generated, a freedom of information request showed.

The UK’s biometrics regulator said it was “not yet fit for use” after the system had only two positive matches from 104 alerts.
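Those figures are consistent: two genuine matches out of 104 alerts means more than 98% of alerts were false positives. A trivial check, using only the numbers above:

```python
def false_positive_rate(alerts, genuine):
    """Fraction of alerts that did not correspond to a genuine match."""
    return (alerts - genuine) / alerts

# Met Police: 104 alerts, only 2 genuine matches.
print(f"{false_positive_rate(104, 2):.1%}")  # → 98.1%
```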

The Met Police said it did not consider the inaccurate matches “false positives” since alerts were then checked a second time.

Another system, used by South Wales Police, produced correct matches for only 10% of its 234 alerts.

These shortcomings explain why the Met Police still values its super-recogniser unit, which was born in the wake of the 2011 London riots, so highly.

Facial recognition software identified only one culprit in the 2011 riots, compared with 609 identified by the super-recognisers, the unit’s founder DCI Neville has pointed out.

Although that was seven years ago, facial recognition still has a huge gulf to bridge, he insisted. Humans have a particular advantage in assessing people from side-on views and can even identify people from the back of the head alone.

4. iPhone X Face ID fooled within week of launch by 10-year-old boy

Cybersecurity researchers fooled Apple’s Face ID just a week after the launch of the iPhone X.

More embarrassing still for the tech giant was the revelation that a 10-year-old boy had managed to circumvent the security system.

5. Apple suffers embarrassing demo Face ID fail at iPhone X launch

Apple’s senior vice president of software engineering Craig Federighi was left flustered during the tech giant’s latest iPhone launch when he couldn’t unlock an iPhone X using Apple’s new facial recognition software.

“Unlocking it is as easy as looking at it and swiping up,” he explained when demonstrating Face ID – only to find that it wouldn’t comply with his attempts to unlock the phone.

When the handset prompted him for a passcode instead, Federighi was forced to switch to a backup device.



1 Comment on "Amazon’s mugshot AI fail and 5 other facial recognition controversies"

Josh Davis

Super-recognisers can also recognise babies’ faces better than controls, and super-recognisers with regular exposure to babies in their workplace may be even more accurate. Why does this matter? The identification of child victims – in particular victims of child sexual exploitation – on video. It is not clear whether face recognition technology can distinguish between children’s faces as effectively as it does adults’ (though the stats above may be alarming), but humans can. New paper just published in Cortex: https://www.sciencedirect.com/science/article/pii/S0010945218302259