Ron Alalouff is a journalist specialising in the fire and security markets, and a former editor of websites and magazines in the same fields.
November 26, 2020


Automatic Facial Recognition: authentication, identification and ethical use

Legal, ethical and privacy considerations should be at the heart of deploying automatic facial recognition (AFR) technology, participants in a recent British Security Industry Association (BSIA) webinar concluded, as Ron Alalouff reports.

Chairing the webinar, BSIA Membership Development Manager Tom Ford set the scene, noting that over the last few years the debate on where and how AFR is used has become a hot topic. In response, the BSIA is developing guidance on the authentication, identification and ethical use of AFR, due to be published later this year or early in 2021.

Pauline Norstrom, CEO of Anekanta Consulting and one of the authors of the guide, said the technology can be a force for good. It can speed up access control and make it contactless in environments such as schools, and can secure applications such as mobile banking, where it is stronger than chip-and-PIN authentication. Criticism arises, however, when AFR is implemented without due regard for privacy and ethics. “The ethical questions which arise relate to proportionality, data privacy impact, who controls the databases which may contain personally identifying information and, most importantly, who makes the decision.”

Nick Fisher, Chief Executive of retail facial recognition provider Facewatch, said the technology is definitely here to stay, and that the main way it has been deployed in retail environments is for prevention and deterrence rather than as a crime-reporting tool. “[The concerns], in my opinion, relate more to the data aspects of AFR rather than how the technology actually works. The challenge is for the technology to be used in a very responsible and transparent way.”

Effectiveness of automatic facial recognition

Jake Parker, Senior Director of Government Relations with the Security Industry Association in the US, said his organisation had recently commissioned an opinion poll on facial recognition, which found that six out of 10 Americans support the technology in general, and seven out of 10 would support using it to enhance security at their workplace. He said that AFR has been successfully used in law enforcement for over a decade, mostly searching against local databases of arrest photos.

In 2019, for example, the New York Police Department searched its system 10,000 times, examining 2,500 potential suspects, including possible matches in 68 murders, 66 rapes, 277 felony assaults and hundreds of robberies. The technology has also been used in over 40,000 cases of human trafficking in North America, he said, helping to rescue 15,000 children over the last five years, and is being used by US Customs at airports and border crossings.

The technology is also growing in commercial security and access control, particularly as a contactless interface for the re-opening of buildings during COVID-19, for flow management in office buildings, and as a factor in multi-factor authentication for entering secure areas of buildings. There have also been some AFR pilot studies in schools.

The panel agreed that the biggest concerns about AFR centre on privacy and databases, making it important to align deployments with relevant privacy legislation. The contentious issues, continued Pauline Norstrom, relate to watchlists, which may be compiled by public authorities from several sources. In the case of police watchlists, which have attracted so much attention, policies need to be robust and transparent. This is especially true when the technology is used to find, for example, a missing person: the search may save a life, and it is the police’s duty to find them, even though the person may not have consented.

The technology should not be used to reinforce bad policy, but humans are flawed, and humans design technology. If the design is agnostic and the use is unbiased, however, there shouldn’t be an issue. And even without AFR, watchlists would still exist: they pre-date the technology.

Referring to the forthcoming BSIA guide, she said: “We are really clear in our advice that there must be a human in the loop at the final determination. It’s very important that this remains the case to avoid scenarios where someone is wrongly identified as a criminal. We’d like the right people to be identified, and they are not necessarily criminals.”
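The “human in the loop at the final determination” principle described in the guide can be illustrated with a minimal sketch. All names, scores and the threshold below are hypothetical, invented for illustration and not taken from the BSIA guidance: the idea is simply that the AFR system only ever proposes candidate matches, and a human reviewer must confirm each one before any action is taken.

```python
from dataclasses import dataclass

# Hypothetical sketch of human-in-the-loop review: the AFR system only
# proposes candidates above a confidence threshold; a person makes the
# final determination. Identifiers and scores are invented.

@dataclass
class Candidate:
    watchlist_id: str
    confidence: float  # similarity score from the matching algorithm

def propose_candidates(matches, threshold=0.9):
    """Filter raw algorithm output; nothing below the threshold is surfaced."""
    return [m for m in matches if m.confidence >= threshold]

def final_determination(candidates, reviewer_decision):
    """A human reviewer confirms or rejects each proposed match.

    reviewer_decision: a callable taking a Candidate and returning True/False.
    The system never acts on an unreviewed match.
    """
    return [c for c in candidates if reviewer_decision(c)]

# Two raw matches: only one clears the threshold, and the reviewer
# still has to confirm it before anything happens downstream.
raw = [Candidate("person-17", 0.95), Candidate("person-42", 0.71)]
proposed = propose_candidates(raw)
confirmed = final_determination(proposed, lambda c: c.watchlist_id == "person-17")
```

The point of the structure is that the automated score alone can never trigger an intervention: the reviewer’s decision sits between the algorithm’s output and any action, which is what the guide’s authors mean by keeping a human in the loop.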

There is no national privacy law equivalent to the GDPR in the United States, said Jake Parker, so organisations need a solid use policy with a clearly defined purpose and limitations on use and enrolment, backed by written policies that are easy to understand and clear criteria justifying inclusion on a list. These values are embodied in the SIA’s Principles for the Responsible and Effective Use of Facial Recognition Technology, which advocates transparency, a clear and defined purpose, data security, human oversight and non-discrimination.

Racial bias in AI?

The panel also addressed the question of whether the algorithms behind AFR produce racially biased results. Jake Parker referred extensively to a study carried out by the National Institute of Standards and Technology (NIST), which he said demonstrates that the highest-performing technologies show no discernible differences across racial groups, though there were some quite concerning differences among the lower-performing algorithms.

“One question is whether the technology itself is somehow inherently flawed. I think that simply isn’t true. The industry has a little way to go to make sure there’s as little variation as possible, and we’re exploring ways of doing that. But it’s getting to the point that facial recognition is almost as accurate as automated fingerprint comparison on a number of measures. Many law enforcement users think [the technology] actually mitigates human bias that may be present from investigators or even eye-witnesses.”


Nick Fisher added that the quality of algorithms has improved 30- to 40-fold since he started in the industry in 2017, saying that human bias was more likely than bias in the technology. “One of the key developments is the database that you use to teach your algorithm to recognise different faces, and how deep and how broad is that database, therefore enhancing its accuracy.”

He also emphasised the need for algorithms to be tested in the context of the environment in which they are going to be deployed. “If you’re using a camera that’s at the wrong angle, or the lens quality is wrong, then you can have the best algorithm in the world but it will generate lots of false positives.”

Returning to the forthcoming BSIA guide, Pauline Norstrom said it has been designed to provide “a clear decision-making journey” to ensure that the impact on the individual is considered before any deployment. It’s aligned with the Organisation for Economic Co-operation and Development’s (OECD) five key principles for the ethical use of AI, which stress the need for clear accountability when developing, deploying or operating AI-based systems, including defining when a human should be in the loop of the final determination.

“The guidance does take you down this journey whereby you look at privacy first, you perform the DPIA [Data Protection Impact Assessment]…That is how to ensure that there’s ethical deployment. If you don’t do that then the organisation deploying could be at risk of doing that unethically and unlawfully. The answer is in the guide – this is why we felt it was so important to get this out and into circulation. The last thing that we want is a ban on this technology, because it has incredible uses and it’s very effective.”

The full webinar is available to watch on the BSIA website.


