IFSEC Insider is operated by a business or businesses owned by Informa PLC and all copyright resides with them. Informa PLC's registered office is 5 Howick Place, London SW1P 1WG. Registered in England and Wales. Number 8860726.
James Moore is the Managing Editor of IFSEC Insider, the leading online publication for security and fire news in the industry. James writes, commissions, edits and produces content for IFSEC Insider, including articles, breaking news stories and exclusive industry reports. He liaises and speaks with leading industry figures, vendors and associations to ensure security and fire professionals remain abreast of all the latest developments in the sector.
IFSEC Insider sits down with Genetec’s Country Manager for UK & Ireland, Paul Dodds, to discuss the good, the bad and the ugly of AI applications for physical security.
IFSEC Insider (II): To set some context, how is Genetec defining Artificial Intelligence (AI)?
Paul Dodds, Country Manager for UK & Ireland, Genetec
Paul Dodds (PD): In the field of data science, Artificial General Intelligence (AGI) refers to a fully functional artificial brain that is self-aware (conscious), intelligent and that can learn, reason and understand. So, while the phrase AI has become ubiquitous, it doesn’t actually exist in the truest sense of the term.
When people talk about the AI in use today, they are in fact referring to its subsets of machine learning and deep learning. For simplicity, and because the term is now so commonly used, we at Genetec refer to AI in the broader sense.
II: What is the main difference between traditional video analytics and AI-based analytics on video surveillance systems?
PD: It largely comes down to how the algorithms are trained to learn and improve over time. AI-based analytics incorporate a subset of AI known as machine learning to learn optimal parameters for the separation and identification of data, without being explicitly programmed in advance.
For example, we’ve used machine learning in our AutoVu automatic number plate recognition (ANPR) solutions to further improve accuracy and reduce the incidence of false-positive readings.
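As a hypothetical illustration only (not Genetec’s AutoVu implementation), one common way ML-based recognition systems reduce false positives is to attach a confidence score to each read and discard reads below a tuned threshold. The `PlateRead` structure and threshold value here are assumptions for the sketch:

```python
from dataclasses import dataclass

@dataclass
class PlateRead:
    plate: str
    confidence: float  # model's estimated probability the read is correct, 0.0-1.0

def filter_reads(reads, threshold=0.85):
    """Keep only the reads the model is sufficiently confident about."""
    return [r for r in reads if r.confidence >= threshold]

reads = [
    PlateRead("AB12 CDE", 0.97),
    PlateRead("OB12 CDE", 0.41),  # likely misread: 'A' confused with 'O'
    PlateRead("XY34 FGH", 0.92),
]
accepted = filter_reads(reads)
print([r.plate for r in accepted])  # ['AB12 CDE', 'XY34 FGH']
```

The threshold itself is something an operator would tune against real data: too low and false positives slip through, too high and genuine plates are dropped.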
Find out more about AI, how the technology has developed and its ethical use on episode 14 of the IFSEC Insider podcast, below…
II: What do you consider the best applications of AI within physical security today?
PD: Machines are exceptionally good at repetitive tasks and analysing large data sets (like video) and at Genetec, we think this is the place where the current state of AI can bring the biggest gains. The best use of machine and deep learning is as tools to comb through large amounts of data to find patterns and trends that are difficult for humans to identify. The technology can also help people make predictions and/or draw conclusions.
We adopt it wherever we can to automate time-consuming processes such as “watching” hours of footage to identify a particular vehicle, monitoring occupancy levels or identifying left luggage in airports. But we’re very clear that there should always be a human in the loop. The machines can do the legwork, but a human always decides on the best next action to take.
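The “human in the loop” pattern described above can be sketched in miniature. This is an illustrative example, not any vendor’s implementation: the detection rule, event fields and queue are all assumptions. The machine does the legwork of scanning events, but every flagged event goes to a review queue where a human decides the action:

```python
from queue import Queue

def detect_left_luggage(events):
    """Machine step: flag events matching a pattern (here, a simple rule)."""
    return [e for e in events
            if e["type"] == "object_left" and e["minutes_unattended"] > 10]

review_queue = Queue()
events = [
    {"id": 1, "type": "object_left", "minutes_unattended": 15},
    {"id": 2, "type": "person_loitering", "minutes_unattended": 3},
    {"id": 3, "type": "object_left", "minutes_unattended": 2},
]
for flagged in detect_left_luggage(events):
    review_queue.put(flagged)  # nothing is actioned automatically

# Human step: an operator works through the queue and chooses the response.
while not review_queue.empty():
    event = review_queue.get()
    print(f"Operator review required for event {event['id']}")
```

The key design choice is that the automated step only *queues* candidates; no action is taken until a person has reviewed each one.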
II: Do you anticipate using large language models such as ChatGPT in your security products in the foreseeable future?
PD: No, and let me explain why. Large language models are trained to satisfy the user as their first priority, so the answers they give are not necessarily accurate or truthful. This is dangerous in a security context.
Before it can be applied in security settings, this technology must first provide reliable output. Today, ChatGPT and similar tools are all online, and every text prompt is used to train the next version. Security use cases would need to take approaches where the models are trained offline, on premises, on a contained, accurate dataset. So, while this technology is advancing fast, there’s still a lot of work to be done for it to be used widely and safely in physical security applications.
But like almost every organisation, we do incorporate large language models in the chatbot technology used to help respond to questions on our website.
II: What risks might AI-powered tools pose to security operations?
PD: A lot of risks can result from unrealistic expectations of any AI algorithm’s accuracy and capabilities. Manufacturers, integrators and end users therefore all have a responsibility to provide clarity not just on what it can do but also what it can’t.
Additionally, we need to be thinking about deepfakes which use deep learning to create convincing but entirely fictional images, videos, voices or text. Detecting deepfakes is a challenge because the technology is evolving so quickly. Right now, deepfakes train on images of the fronts of faces. So, one way to detect them is to focus on the sides of faces and heads.
As detection techniques evolve, we can foresee a future where VMS would include a deepfake detection component.
II: What misconceptions around AI do you believe need challenging?
PD: It’s easy to think AI is magic when in reality it’s statistics. It’s giving us the outcome most likely to fulfil our request, based on the data it has analysed. But it is much less sophisticated than movies and TV suggest.
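That “AI is statistics” point can be shown in a few lines. This is purely illustrative, with made-up scores: a classifier does not “know” the answer, it simply returns the outcome with the highest estimated probability given the data it was trained on:

```python
def most_likely_outcome(probabilities):
    """Return the label with the highest estimated probability."""
    return max(probabilities, key=probabilities.get)

# Hypothetical scores from a video-analytics model for one detected object:
scores = {"person": 0.72, "vehicle": 0.19, "animal": 0.09}
print(most_likely_outcome(scores))  # person
```

Note the model still assigns 28% of its probability mass elsewhere, which is exactly why the output is a statistical best guess rather than a fact.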
Image credit: Kittipong Jirasukhanont / Alamy Stock
The most significant concerns arise when technology providers over-promise solution capabilities, or when end-users have unrealistic expectations. There are numerous myths that I take great pleasure in busting when they arise. The main one is that AI can completely replace human security personnel. Let’s not pretend that human judgment, intuition and decision-making don’t remain crucial in a wide range of security scenarios.
Others are that AI technology is inherently secure, or that it is always highly reliable and accurate out of the box. As anybody with experience of video analytics knows, there are a lot of decisions at the procurement, deployment and training stages that dictate how effective a solution is at solving real-world problems.
II: Is there any regulation/guidance being considered for AI-based security tools – either from governments or from the security industry itself?
PD: I’m hopeful that in time the proposed EU AI Act can do for artificial intelligence what the EU GDPR did for data protection and privacy. However, as an industry we can’t delegate this to the politicians.
Ensuring this technology is used responsibly is everyone’s role. Regulation can help maintain checks and balances on the road to a more connected and AI-enabled world, but we also need to innovate in order to create those solutions. Typically, government officials don’t understand the minute differences that can have a significant impact and lead to unintended consequences, so it’s the responsibility of experts in the field to educate, in order for laws to stay in line with the technology.
Legislation around the ethical use of AI is a requirement to govern users (and manufacturers) and this should be developed in partnership with academia, technology experts, civil rights experts, and industry leaders. This collaboration is needed to ensure regulators fully understand the pros/cons to any technology and its deployment – and that individual privacy and liberties are appropriately protected.
“There should always be a human in the loop” – AI in physical security: Uses, myths & responsibilities
James Moore