Julian Hall

Freelance journalist and copywriter, Textual Healing

November 1, 2019


Is AI security overhyped?

The notion that Artificial Intelligence (AI) is changing physical security is a given. However, the debate now is about the extent to which this is true and how much hype there is around AI solutions.

The importance of this discussion was highlighted in recent research suggesting that 75% of IT decision-makers believed AI was the answer to all security challenges. However, as Eset, the company that carried out the research, asserts, IT teams could be putting their companies at greater risk by relying on AI and leaving themselves open to the extra threats invited by the technology. This prospect was compounded by the publication last year of a report, The Malicious Use of Artificial Intelligence, which described AI as a “dual-use technology” with potential military and civilian uses, akin to nuclear power, explosives and hacking tools.

Writing in IFSEC Global, Adeesh Sharma noted:

“With technology comes the challenge of protecting infrastructure from ‘backdoor’ entry, something that can only be ensured by tightening all loose endpoints. This is where the knowledge of cyber security comes to the fore in preventing access to systems and data, and the importance of protecting both physical and logical security assets.”

Risks though there are, AI is considered a game-changer. Its trail-blazing effect is most evident in: crowd monitoring, with the capacity to track the movement of people with great accuracy and to detect hidden objects; the use of sensors, robots and drones for security patrols (making physical patrols redundant in some cases); video surveillance and analysis in real time, anticipating crime rather than reacting to it; and facial recognition and immediate decision-making, determining, for example, whether police should be called for raised alarms.

Examples of areas where the two types of security – physical and AI – converge include keycard systems, which combine electronic systems and communication networks, and building automation systems that regulate integrated security access control, camera systems and HVAC controls all in one platform, and which communicate over a common IT network.

Some examples of AI/ML security in action appear to have an autonomous feel to them. For example, during IFSEC 2019 in June, Warren Stein of video analytics company Briefcam outlined a system that enabled police in Hartford, Connecticut, to monitor a crack house and isolate suspects using various filters based on gender, clothing and so on. It didn’t require operatives to monitor the house around the clock and – highlighting the great boon that AI brings – processed vast amounts of information in a short space of time.
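The kind of attribute-based filtering described above can be illustrated with a short sketch. To be clear, this is not Briefcam’s actual API or data model – the record fields and the `filter_detections` helper are hypothetical – but it shows how, once video analytics has reduced footage to structured detection records, suspects can be isolated by filter rather than by round-the-clock viewing:

```python
# Hypothetical sketch: filtering video-analytics detections by attributes.
# The record fields (gender, clothing, hour) are illustrative only,
# not Briefcam's actual data model.

detections = [
    {"id": 1, "gender": "male",   "clothing": "red jacket",  "hour": 23},
    {"id": 2, "gender": "female", "clothing": "blue coat",   "hour": 14},
    {"id": 3, "gender": "male",   "clothing": "red jacket",  "hour": 2},
    {"id": 4, "gender": "male",   "clothing": "grey hoodie", "hour": 3},
]

def filter_detections(records, **criteria):
    """Return only the records matching every supplied attribute filter."""
    return [r for r in records
            if all(r.get(k) == v for k, v in criteria.items())]

suspects = filter_detections(detections, gender="male", clothing="red jacket")
print([r["id"] for r in suspects])  # prints [1, 3]
```

The point of the sketch is the division of labour: the machine does the exhaustive matching across hours of footage, while the human operator chooses the filters and judges the results.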

Of course, the system in Hartford still required skilled human operatives to decide on the significance of the patterns emerging. Informing that kind of decision – and sifting the relevant from irrelevant – is a key benefit of AI, while its ability to zip through numerous repetitive tasks is vital to handling the vast amounts of data. This is a crucial asset in untangling and prioritising the plethora of information created by the Internet of Things, as well as surveillance material.

It’s the distinction between the two variants of AI – Machine Learning, applied to simpler repetitive tasks, and Deep Learning, taking more anticipatory measures – and the current limitations of both that concern Pascal Geenens, EMEA Security Evangelist at cybersecurity firm Radware. Geenens wrote in a piece for IT ProPortal last year: “I am yet to see a security application or system that can intelligently adapt and evolve to different situations and not just continuously perform a single, repetitive task.”

Geenens’ argument strengthens the view that AI is overhyped, but he makes a key distinction between Deep Learning and Machine Learning (ML).

“Deep learning is able to find associations in data we humans would never be able to find,” he says, “helping us reach new levels of detection we couldn’t before with traditional models and machine learning.” However, Geenens adds: “Deep Learning models are only performing well in a static environment. Real networks are environments that are continuously changing and evolving. Deep learning cannot work fully autonomously in such environments, at least not without humans continuously improving the training sets, re-training and evaluating the model, improving the learning and resizing, re-architecting the neural networks, all the while sanitising the outputs.”
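Geenens’ point about static versus evolving environments can be made concrete with a deliberately simple sketch. The “model” below is just a mean-and-deviation threshold, not a neural network, and the traffic figures are invented; but it shows the failure mode he describes – a detector fitted once to a baseline starts flagging legitimate activity as soon as the network’s normal behaviour shifts, and stays wrong until a human retrains it:

```python
# Minimal sketch of concept drift: a detector trained on a static baseline
# misfires once the environment evolves, until it is retrained.
# Traffic figures (requests/min) are invented for illustration.
import statistics

def train(baseline):
    """Fit a trivial anomaly detector: flag values beyond 3 std devs."""
    mean = statistics.mean(baseline)
    std = statistics.stdev(baseline)
    return lambda x: abs(x - mean) > 3 * std  # True = flagged as anomalous

# Network traffic as it looked when the model was trained
is_anomaly = train([100, 102, 98, 101, 99, 100, 103, 97])

print(is_anomaly(500))  # True  - a genuine spike is caught
print(is_anomaly(101))  # False - normal traffic passes

# The network evolves: ~200 req/min becomes the new normal, but the
# stale model now flags legitimate traffic as an attack...
print(is_anomaly(200))  # True - a false positive caused by drift

# ...until a human retrains it on fresh data, as Geenens describes.
is_anomaly = train([200, 205, 195, 202, 198, 201, 199, 203])
print(is_anomaly(200))  # False - the retrained model has adapted
```

The human-in-the-loop step is not optional decoration: without the retraining line at the end, every reading in the new normal range remains a false alarm.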

This is a useful reality check for those responsible for IT and security in terms of what to expect from their new systems. It’s not just a case of leaving security to smart algorithms; a professional, experienced human eye on decision-making will always be needed. That said, there is little doubt that – despite the cost and data management issues – investing in AI security is important. As Geenens concedes: “The only way to fight automation is with automation.”
