Facial recognition is creating headlines. It’s a controversial technology, with huge implications for privacy and security; nonetheless, the facial recognition market is expected to grow to around $7.7 billion in 2022. It’s a big business and is being used the world over as a sophisticated security solution.
But how does it work, and what are the implications for our privacy and security?
Facial recognition is a pretty simple concept. It works more-or-less exactly how it sounds, by taking an image of a face and comparing it to a database of faces, paying particularly close attention to a face’s distinct features such as the nose, eyes and mouth. It’s not a million miles away from a Snapchat or Instagram filter many of us are familiar with, but when used by law enforcement it can play a key role in identifying and capturing suspects.
The process goes something like this: a camera captures an image of a face; software detects the face and measures its distinctive features; and the resulting profile is compared against a database of known faces to find a match.
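A minimal sketch of that capture-and-compare idea, assuming each face has already been reduced to a small feature vector (measurements of eyes, nose, mouth and so on). All names, vectors and the threshold here are illustrative, not any vendor's actual method.

```python
import math

def distance(a, b):
    """Euclidean distance between two face feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def best_match(probe, database, threshold=1.0):
    """Return the closest enrolled identity, or None if nothing in the
    database is close enough (an 'unknown face')."""
    name = min(database, key=lambda k: distance(probe, database[k]))
    return name if distance(probe, database[name]) <= threshold else None

# Toy database of two enrolled faces (made-up feature vectors).
database = {"alice": [0.1, 0.9, 0.4], "bob": [0.8, 0.2, 0.5]}
print(best_match([0.12, 0.88, 0.41], database))  # closest to alice's vector
```

Real systems use far larger vectors (often 128 or more dimensions, produced by a neural network), but the core step is the same nearest-neighbour comparison with a rejection threshold.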
Most facial recognition software relies on simple 2D images. However, many systems are increasingly using 3D image capture and IR cameras to make better, more sophisticated matches.
A newer technique, lidar, covers your face with a grid of laser pulses, which reflect off the surface and are picked up by an infrared camera, producing an accurate 3D map of your face. This can considerably improve the accuracy of facial recognition software. Infrared can also be used in thermal imaging, and thermal facial recognition is beginning to take off as a way of identifying faces in low light. Until thermal imaging becomes more widespread, however, 3D facial recognition cannot be used in darkness.
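Lidar estimates distance from the round-trip time of each laser pulse: depth = (speed of light × time) / 2, with the division by two because the pulse travels out and back. A toy illustration with made-up timing numbers:

```python
# Toy illustration of lidar ranging: depth from pulse round-trip time.
C = 299_792_458.0  # speed of light in m/s

def depth_from_round_trip(t_seconds):
    """Distance to the reflecting surface for one laser pulse,
    halved because the pulse travels there and back."""
    return C * t_seconds / 2.0

# A pulse returning after roughly 3.3 nanoseconds indicates a surface
# about half a metre away -- typical phone-to-face distance.
print(round(depth_from_round_trip(3.336e-9), 3))
```

Repeating this measurement across thousands of points on the face yields the 3D depth map the recognition software actually compares.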
Another emerging technology is Surface Texture Analysis. This uses a picture of a patch of skin – a skin print – and breaks it up into smaller blocks to distinguish lines, pores and textures unique to that patch. This has been used to identify the difference between identical twins, something that current facial recognition software struggles with.
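The block-splitting idea can be sketched as follows: divide a grayscale skin patch into small tiles and compute a simple per-tile statistic as a crude texture descriptor. Real systems use far richer descriptors; the variance statistic and the 4×4 patch below are illustrative only.

```python
# Sketch of block-based surface texture analysis: split a skin-patch
# image into tiles and compute the pixel variance of each tile.
# Smooth skin gives low variance; lines and pores give high variance.

def block_variances(patch, block=2):
    """Split a square grayscale patch into block x block tiles and
    return each tile's pixel variance, row by row."""
    n = len(patch)
    features = []
    for r in range(0, n, block):
        for c in range(0, n, block):
            pixels = [patch[r + i][c + j]
                      for i in range(block) for j in range(block)]
            mean = sum(pixels) / len(pixels)
            features.append(sum((p - mean) ** 2 for p in pixels) / len(pixels))
    return features

# Made-up 4x4 grayscale patch: two smooth corners, two textured ones.
patch = [[10, 10, 200, 210],
         [10, 10, 220, 200],
         [90, 95,  50,  50],
         [85, 90,  50,  50]]
print(block_variances(patch))
```

Comparing such per-block feature lists between two images is what lets the technique separate even identical twins, whose overall facial geometry matches but whose skin texture does not.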
Facial recognition software is everywhere. As mentioned above, Snapchat and Instagram use primitive versions of the technology to add filters to your face – every time you add dog ears and a long tongue to your selfie, Instagram has used facial recognition software to detect your eyes and mouth. In addition, users of the iPhone X and subsequent models now use facial recognition – called Face ID on Apple devices – to unlock their phones.
But it is increasingly being used for security purposes. Facial recognition software has become adept at matching faces spotted on CCTV cameras to police databases. The UK police are trialling the software across the country, although it has faced considerable criticism for the manner in which it has been rolled out. In addition, the US Department of Homeland Security now deploys facial recognition in airports, making its first-ever arrest using the technology in August 2018.
IFSEC International has exhibitors that manufacture the latest facial recognition technology, as well as companies that use it. You can find some of them here.
Its more benign uses include identifying age, gender and ethnicity in crowds, letting marketers and advertisers better target their audiences, and matching faces to passports for airlines.
Facial recognition is still relatively new, so it has many shortcomings. For instance, it can still be totally baffled by a simple pair of sunglasses, long hair and low resolution, rendering many of the less sophisticated systems useless. Also, poor lighting conditions and odd facial positioning may change the way the software measures your face, making it more difficult to match it to a database.
And the database itself can be a barrier. If a database contains relatively few facial images or several incorrectly-identified faces, it can fail to find a match, or misidentify a face. Moreover, limitations in data processing can make the process of matching a face laborious, making it useless in a high-pressure situation.
Facial recognition software comes with a whole host of concerns. Critics fear data collected by the technology could be stolen by hackers or misused by government and private agencies, while others cite the lack of transparency surrounding its usage.
Furthermore, there are concerns surrounding the ownership of your image rights. Some social media sites may have asked you to relinquish these rights upon registration, perhaps long before facial recognition software became widespread. As a result, you may have unwittingly given up ownership of anything captured by this technology, without any knowledge of how it is used.
In the United Kingdom, the use of facial recognition software by police agencies has proven contentious. In October this year, the Information Commissioner Elizabeth Denham warned that the technology may have been used unlawfully by the police. “Police forces must provide demonstrably sound evidence to show that live facial recognition (LFR) technology is strictly necessary, balanced and effective in each specific context in which it is deployed,” she said.
Earlier in the year, a man was fined £90 for hiding his face from facial recognition cameras during a software trial in Romford, Essex. Privacy campaigners claim he was well within his rights, citing the lack of legislation surrounding the technology and the high number of false identifications; police, however, insist he was acting suspiciously by covering his face.
Predictably, firms have begun developing devices to protect your identity from facial recognition software.
For instance, a collaboration between Carnegie Mellon University and the University of North Carolina at Chapel Hill has produced anti-facial recognition, or ‘adversarial’, glasses that baffle facial recognition systems and make them incapable of identifying a face. According to the researchers, these devices can simply be 3D printed by any user. Less sophisticated software, meanwhile, can be fooled by ordinary glasses, masks and so on.
In addition, social networking sites including Facebook allow you to opt-out of facial recognition systems; it’s wise, however, to be extra careful about the imagery you choose to share on social media.
Attending IFSEC can help improve your knowledge of how this technology is currently used and of any updates to the legislation surrounding facial recognition. You can discover more about facial recognition at IFSEC here.
IoT-connected devices such as tablets, phones and games consoles frequently use facial recognition technology, and many of them routinely capture photos and video of your face as you use the device (check your app permissions!). This makes these devices ripe for abuse by hackers, but plenty of security solutions exist to protect your network from intruders.
A recent report by NIST found that, in 2018, facial recognition algorithms failed to match faces correctly in only 0.2% of searches in a database of 26.6 million photos. In 2014, the figure was 4%. This is an astonishing improvement, due in part to the sophisticated algorithms technology companies are developing.
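Put another way, those quoted error rates amount to a twentyfold reduction in failed searches. A quick worked illustration per 1,000 searches:

```python
# Per-1,000-searches illustration of the quoted NIST error rates.
searches = 1_000
misses_2014 = searches * 0.04   # 4% failure rate in 2014
misses_2018 = searches * 0.002  # 0.2% failure rate in 2018
print(misses_2014, misses_2018, misses_2014 / misses_2018)
```

That is, roughly 40 failed searches per thousand in 2014 versus 2 per thousand in 2018.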
Facial recognition is also proving successful at fighting crime. For instance, in 2016, FBI software identified the man responsible for the Brussels terror attacks; as the technology improves, more suspects will be identified more accurately, making it easier to apprehend serious criminals.
However, as the technology becomes more ubiquitous, privacy concerns will become more acute. Governments and regulators need to provide robust guidelines to users and make it easy for ordinary people to obtain the data that firms and government agencies are storing about them. Nonetheless, the facial recognition market is an exciting one for security professionals.