10 Oct 2023
by Nick Smith

AI in physical security: Opportunities, risks and responsibility

Guest blog by Nick Smith, Business Development Manager at Genetec #techUKCyber2023

AI, or more accurately its subsets machine learning (ML) and deep learning (DL), stands to transform the physical security industry.

This brief primer outlines the potential and limitations of these subsets of AI in physical security applications for public spaces, helping security professionals match AI-based technologies to appropriate use cases.

What is AI in a physical security context?

Machine learning (ML) and deep learning (DL) are the subsets of AI typically used in physical security systems. These algorithms use learned data to detect and classify objects accurately. When working with data collected by physical security devices such as cameras, doors, or other sensors, machine learning applies statistical techniques to solve problems, make predictions, or improve the efficiency of specific tasks. Deep learning, a further subset of ML, uses multi-layered neural networks to learn the relationship between inputs and outputs directly from raw data such as video. Recognising objects, vehicles, and people, or sending an alert when a barrier is breached, are all examples of what this technology can do in a physical security context.
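
For readers who want to see what this looks like in practice, here is a minimal sketch of deep-learning object detection on a single camera frame, using an off-the-shelf pretrained model from the open-source torchvision library. The model choice, input file name, and confidence threshold are illustrative assumptions, not a description of any particular vendor's pipeline.

# A minimal sketch of DL-based object detection on one camera frame,
# using a pretrained torchvision model. Model, file name, and threshold
# are illustrative assumptions only.
import torch
from torchvision.io import read_image
from torchvision.models.detection import (
    fasterrcnn_resnet50_fpn,
    FasterRCNN_ResNet50_FPN_Weights,
)

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights)
model.eval()

frame = read_image("camera_frame.jpg")       # hypothetical input frame
batch = [weights.transforms()(frame)]        # normalise to the model's expected input

with torch.no_grad():
    detections = model(batch)[0]

labels = weights.meta["categories"]
for label, score in zip(detections["labels"], detections["scores"]):
    if score > 0.8:                          # assumed confidence threshold
        print(f"{labels[label]}: {score:.2f}")   # e.g. "person: 0.97"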

Machines are exceptionally good at repetitive tasks and at analysing large data sets (like video), and this is where the current state of AI can bring the biggest gains. The best use of machine and deep learning is as tools to comb through large amounts of data for patterns and trends that are difficult for humans to identify, and to help people make predictions or draw conclusions.
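
To make that concrete, here is a minimal sketch of one common pattern-finding technique, unsupervised anomaly detection, applied to simulated access-control events. The features, numbers, and contamination rate are invented purely for illustration, not a specific vendor's method.

# A minimal sketch of ML surfacing unusual patterns in access-control
# logs, assuming events are reduced to numeric features (hour of day,
# door ID). All data below is simulated for illustration only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Simulated events: most badge swipes during business hours at doors 1-3
normal = np.column_stack([rng.normal(13, 2, 500), rng.integers(1, 4, 500)])
odd = np.array([[3.0, 9], [2.5, 9]])      # 3 a.m. swipes at a rarely used door
events = np.vstack([normal, odd])

model = IsolationForest(contamination=0.01, random_state=0).fit(events)
flags = model.predict(events)             # -1 marks suspected anomalies
print(events[flags == -1])                # surfaces the odd events for human review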

Physical security technology does not typically incorporate the subset of AI known as large language models (LLMs), the models behind ChatGPT and other generative AI tools. An LLM is designed first and foremost to produce a response that satisfies the user, so its answers are not necessarily accurate or truthful. In a security context, that is dangerous.

Reality checks

Any manufacturer using AI in its offerings has a responsibility to ensure that the technology is developed and implemented in a responsible and ethical way.

Here are a few of the biggest misconceptions about AI in physical security that must be consistently challenged:

MYTH: AI can replace human security personnel

The reality: AI technology can automate repetitive and mundane tasks, freeing human security personnel to focus on more complex and strategic activities. However, human judgment, intuition, and decision-making are still crucial in most security scenarios. AI can augment human capabilities and improve efficiency, but it requires human oversight, maintenance, and interpretation of its results.

MYTH: AI-powered surveillance systems are highly accurate and reliable

The reality: AI systems make mistakes. They are trained on historical data and patterns, and their accuracy depends heavily on the quality and diversity of the training data. Biases and gaps in that data can lead to biased or incorrect outcomes, as the sketch below illustrates. Moreover, AI systems can be vulnerable to attacks in which malicious actors intentionally manipulate the system's inputs to deceive or disrupt it.
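
One simple safeguard is to audit a system's accuracy separately for each operating condition it will face. The sketch below, using numbers simulated purely to illustrate the check rather than real benchmark results, shows how a gap in the training data can surface as a measurable performance gap.

# A minimal sketch of auditing a detector for uneven performance across
# capture conditions (e.g. day vs night footage). All numbers are
# simulated for illustration only.
import numpy as np

conditions = np.array(["day"] * 500 + ["night"] * 500)
truth = np.ones(1000, dtype=bool)         # an object is present in every frame
detected = np.concatenate([
    np.random.default_rng(0).random(500) < 0.95,   # 95% detection rate by day
    np.random.default_rng(1).random(500) < 0.70,   # 70% at night: a data-driven gap
])

for group in ("day", "night"):
    mask = conditions == group
    rate = (detected & truth)[mask].mean()
    print(f"{group}: detection rate {rate:.2f}")   # reveals the night-time blind spot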

MYTH: AI can predict security incidents

The reality: AI can analyse large amounts of data and identify patterns that humans might miss, but it cannot reliably predict security incidents. AI systems rely on historical data and known patterns, and they may struggle to detect novel or evolving threats. Additionally, security incidents can involve complex social, cultural, and behavioural factors that may be challenging for AI algorithms to fully understand and address.

MYTH: AI technology is inherently secure

The reality: While AI can be used to enhance security measures, the technology itself is not immune to security risks. AI systems can be vulnerable to attacks, such as data poisoning, model evasion, or unauthorised access to sensitive information. It is crucial to implement robust security measures to protect AI systems and the data they rely on.
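
To make "model evasion" concrete, here is a minimal sketch of the widely documented fast gradient sign method (FGSM), which crafts a tiny perturbation that can flip a classifier's output. The toy model, input, and perturbation budget are illustrative assumptions, not a statement about any specific product.

# A minimal sketch of a model-evasion (adversarial example) attack using
# FGSM: a small, crafted perturbation can change a classifier's
# prediction. The toy model and epsilon value are assumptions.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # toy classifier
model.eval()

image = torch.rand(1, 1, 28, 28, requires_grad=True)  # stand-in camera input
true_label = torch.tensor([3])

loss = nn.functional.cross_entropy(model(image), true_label)
loss.backward()                            # gradients of the loss w.r.t. the pixels

epsilon = 0.1                              # perturbation budget (assumed)
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1)

print(model(image).argmax(1), model(adversarial).argmax(1))
# The perturbed frame can be misclassified even though it looks unchanged.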

Striking a balance

As with any new technology, acknowledging the risks of AI doesn't negate its potential benefits. With judicious application and proper oversight, AI can increase efficiency and security while minimising negative impact.

