FACIAL RECOGNITION

A window to more than the soul? AI research raises troubling questions about privacy, profiling and wrongful arrest


Adam Bannister is a contributor to IFSEC Global, having served as its Editor from 2014 to November 2019. He has also worked as a journalist at the cybersecurity publication The Daily Swig and as Managing Editor at Dynamis Online Media Group.
September 13, 2017


When Apple introduced Touch ID fingerprint readers to the iPhone in 2013, it seemed the end was nigh for password authentication.

Now the latest incarnation of Apple’s flagship product – the iPhone X – will liberate users even from the need to press a button, by automatically verifying their identity using infrared and 3D sensors within the phone’s front-facing camera.

However, even as it promises to eventually free us from the tyranny of myriad passwords and the ‘forgotten password’ process, the growing sophistication of facial recognition has generated considerable alarm in the media this week.

A lawyer writing in the Guardian – who admits she will still buy an iPhone X – says “we cannot become complacent to the serious privacy risks it often poses – or think that all its applications are alike.”

AI ‘gaydar’

A Stanford University professor has been criticized by privacy campaigners and the LGBT community after developing AI that he claims can predict people’s IQ, political leanings and whether they are gay or straight.

“The face is an observable proxy for a wide range of factors, like your life history, your development factors, whether you’re healthy,” said Michal Kosinski, who says his research will stimulate much-needed debate about creating regulatory safeguards to protect citizens’ privacy.


Inevitably dubbed AI ‘gaydar’, the system correctly identified sexual orientation 91% of the time for men and 83% of the time for women, based on a few photos of each face.
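
As critics of such studies are quick to point out, headline accuracy figures say little about how a classifier would behave if unleashed on a whole population. Below is a minimal sketch, using purely illustrative numbers that do not come from the study (a hypothetical 7% base rate, with 91% treated as both the hit rate and the correct-rejection rate), of how the base-rate problem deflates an apparently accurate classifier.

```python
# A minimal, illustrative sketch (numbers are hypothetical, not from the
# study) of why a "91% accurate" classifier becomes unreliable when used
# to screen a whole population rather than a balanced test set.

def positive_predictive_value(base_rate, sensitivity, specificity):
    """Probability that a positive label is actually correct (Bayes' rule)."""
    true_positives = base_rate * sensitivity
    false_positives = (1 - base_rate) * (1 - specificity)
    return true_positives / (true_positives + false_positives)

# Assumed: 7% base rate; 91% taken as both sensitivity and specificity.
ppv = positive_predictive_value(base_rate=0.07, sensitivity=0.91, specificity=0.91)
print(f"chance a flagged individual is labelled correctly: {ppv:.0%}")  # ~43%
```

Under those assumptions, fewer than half of the people flagged would be labelled correctly, the same arithmetic that undermines any attempt to screen large groups for rare traits.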

That his research also suggests a link between facial features and political beliefs lends credence to the theory that ideological outlook is to some extent heritable.

Kosinski, an assistant professor of organisational behaviour, said other studies have found that conservative politicians tend to be more physically attractive than liberals, perhaps because good-looking people are on average more successful in life, and the successful tend to oppose income redistribution.

Kosinski’s findings also raise the spectre of schools screening prospective students using facial recognition AI. “We should be thinking about what to do to make sure we don’t end up in a world where better genes means a better life,” he said.

Psychopathic traits

Kosinski also says AI could conceivably identify people with psychopathic traits – a troubling prospect given that most psychopaths do not commit serious crimes and many murderers are not believed to be psychopaths. “Even people highly disposed to committing a crime are very unlikely to commit a crime,” he said.

He also says that AI could weed out potential troublemakers at the entrance to concerts and nightclubs, much as bouncers and security guards already make subjective judgements based on body language, clothing and signs of drunkenness.

Facial recognition is being rolled out across Australian airports next year in a move that could shorten queues and prevent people travelling on fake passports. However, like many technologies, it will be enormously useful to authoritarian governments.


The Chinese government has identified jaywalkers using facial recognition software, while in Russia anti-corruption protesters have been screened.

More than half of all American adults are in a face recognition database because they have a driver’s license.

Retailers also use the technology to identify shoplifters, though privacy campaigners warn they could put it to a more sinister use: profiling individual shoppers by how often they visit the store, how long they stay and how often they buy something. Meanwhile, a phone app developed in Russia allows strangers to find out who you are just by taking your photo.

The applications in the physical access control industry are obvious. Cards and fobs pose a risk because people can lose them, whereas facial recognition seems almost impossible to fool – at least on the face of it, so to speak.

Wrongful arrest

But the TDSis and Honeywells of this world are unlikely to abandon the humble access card just yet; distance, angle and lighting are just three of many variables that affect facial recognition’s accuracy.

Nevertheless, even though law enforcement claims to act only on matches that achieve a high ‘match threshold’, Clare Garvie, associate with the Center on Privacy & Technology at Georgetown Law, thinks the technology could lead to wrongful arrest.

In her Guardian article she says: “The system was designed so that people who looked 50% or more similar to the wanted suspect were flagged as a possible match. This means that a vast number of ‘possible matches’ will be completely innocent people. These are the face recognition systems where a mistake could mean you are investigated, if not arrested and charged, for a crime you didn’t commit.”

A face recognition system deployed by the Met Police at Notting Hill Carnival reportedly had a success rate of 2.86%: some 35 people were misidentified, while only one correct match was achieved.
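
A quick back-of-envelope check, using the article’s own figures, shows where that headline rate comes from: precision is simply correct matches divided by everyone the system flagged.

```python
# Back-of-envelope precision for the reported Notting Hill Carnival
# deployment, using the article's figures: 35 misidentifications and
# one correct match.
correct_matches = 1
misidentified = 35

precision = correct_matches / (correct_matches + misidentified)
print(f"precision: {precision:.2%}")  # 2.78%, roughly the reported ~2.86%
```

Strictly, one correct match out of 36 flags works out at 2.78%, while the reported 2.86% corresponds to one in 35, so the exact denominator in press reports is slightly ambiguous; either way, fewer than 3% of the people flagged were genuine matches.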

Algorithms are often flawed, as former NSA security officer David Venable told IFSEC Global: “The typical big data formula is a bunch of data plus a bunch of algorithms equals big data. But based on real-world results, we’re looking at bad data and biased algorithms, which I’m terming ‘big delusion’.”

