16 Apr 2025
by David Brauchler III

Guest blog (NCC Group): Navigating AI Risks: Essential Cyber Security Measures for CISOs

Learn more about how to best to deal with risks associated with AI

With the rapid deployment of Artificial Intelligence (AI) and Large Language Models (LLM) across virtually every business sector and use case, CISOs are rightfully concerned. With any new technology comes new threats, and just as companies are developing, testing and evolving AI capabilities, rest assured that threat actors are doing the same.

While we inherently know AI introduces risk, the exact threat vectors are not well established, and many are theoretical—we know it’s possible, even if we haven’t seen it in the wild yet. Because business use of AI is still so new, threat research and mitigation practices are still in nascent stages. But every day, organisations are rolling out vulnerable systems just waiting to be exploited. All it takes is for someone to notice.

Given this uncertainty, CISOs need to be aware of the potential threats and organisational impacts and prioritise building resilience before their AI utilisation exceeds their risk tolerance—or worse, invites a breach.

With AI, there's a risk that trust can go out the window

AI is different from many of the emerging technologies. When you build a web app, for example, you made it, you know how it works and you control it, therefore you can trust it. AI, on the other hand, interacts with and operates across so many data sets, applications and users, both trusted and untrusted. And it can be manipulated to wreak havoc even when built on the most trusted components.

Organisations are only now recognising the risks, and we’re noticing a pattern of recurring vulnerabilities. Here are just a few we’ve seen:

  • Prompt inclusion. In most instances of AI, user prompts become part of the model—a fact many casual users are unaware of. Unless specifically excluded (like in the case of Microsoft Copilot), prompts are used to train the LLM, and users should assume this unless assured otherwise. That means if users upload proprietary data for analysis, it becomes irrevocably part of the data set. It’s essentially impossible to extract. This ties directly to the notion of “shadow AI,” where employees are using third-party LLMs without knowledge or authorisation by IT teams—a practice that has led to serious data leaks. Even for models that claim to exclude prompts, it’s best to exercise discretion. Training users on this important fact is essential guidance.
  • Prompt injection variants. As the use cases for AI grow, so too are the variations on compromise. One example is what OWASP calls “indirect prompt injection,” when an attacker hijacks the conversation between the AI and the user. Cross-user prompt injection is also becoming more common. In this variation, instead of telling the AI to do something malicious, bad actors will prompt the AI to do something malicious to another user’s account—like delete it, for example. This is a massive vulnerability caused by failure to properly isolate data submitted by potential threat actors from the data the AI is reading from trusted users.
  • Data poisoning. LLMs operate by repeating patterns, but they can’t discern good patterns from bad. Data sets can be poisoned by bad actors so that, with the right code or phrase, they can control the model and direct it to perform malicious actions—and your company may never know. That’s what happened with Microsoft’s Tay chatbot which was poisoned by prompts and Microsoft had to quickly shut it down.
  • Model extraction or inversion. In this attack, a bad actor can prompt the model to either extract your data or even duplicate the functionality or clone the model itself. That means, if you train the model on anything sensitive, threat actors can steal that data, even if they don’t have direct access to it. That’s why models trained on proprietary data should never be exposed to external parties. While this attack is academic right now, it could easily be exploited in the absence of proper segmentation.
  • Data pollution. In a similar scenario, threat actors can take advantage of models that interact with live data from untrusted sources and intentionally pollute that data. This introduces a wide range of vulnerabilities, the least of which is inaccurate results. For example, an LLM that scrapes Amazon product reviews can be polluted with falsified reviews, skewing analysis and output. In some cases, LLM agents exposed to malicious data can themselves become agents of that data. To prevent this, CISOs need to be aware of and cautious about the data their LLMs are exposed to.
  • Excessive agency. One of the most severe vulnerabilities, this threat occurs when a model is given access to privileged functions that a user shouldn’t have access to. This gives users the ability to manipulate the model to escalate their own privileges and access privileged functions.

New threats. Same fundamentals.

Because AI is, for the most part, open source and widely available for anyone to learn, we should anticipate more of these attacks as AI adoption skyrockets. And it doesn’t take sophisticated nation-state actors or someone with vast experience in deep technical exploitation chains.

The good news is, defending against AI model attacks requires essentially the same security fundamentals CISOs have been leveraging for years, with a few new twists. While there are some frameworks to guide developers—ISO 42001 standardises how organisations should implement AI systems and Europe has just introduced the AI Act—these aren’t holistic nor broadly applicable enough.

For companies that are figuring it out as they go, here are some best practices to consider:

  • Educate employees about the risks of even casual AI use in the workplace. Unfortunately, you can never fully trust it, and they should approach every interaction with that assumption.
  • Prioritise security by design. Developers need to think carefully about who and what the model will be exposed to and how that can influence its behaviour. Security should be part of the process from inception, not as an afterthought. Don’t assume the model will always behave the way you expect.
  • Conduct threat modelling. Just as you would with any new introduction to your technology landscape, perform a threat analysis on AI tools. Identify what the model has access to, what it’s exposed to, and how it’s intended to interact with other components or applications. Understand risks, data flows and the threat landscape and implement trust boundaries.
  • Consider multi-directional access. Because there are so many touchpoints, most organisations don’t realise the full scope of the risk. While User A may not have ill intent, User B can manipulate User A’s account to control the model (horizontal threat), or the model can be manipulated to escalate functionality or privileges (vertical threat). Where two models intersect—a text-based model that interacts with an image generation model, for example—this opens a multi-modal risk when channels are left open to leak data across one another to the end user.
  • Deploy data-code segmentation. Anytime you expose a ML model to untrusted data, the model becomes an agent of that data. The solution: segment models from the data using a “gatekeeper” approach that prevents the model from accessing untrusted data and trusted functions at the same time.

Even if the fundamentals are the same, for many CISOs, the timeline has accelerated. Rapid adoption calls for urgent solutions before things get completely out of hand.

Conclusion

As AI continues to evolve, so too will the threats associated with it. CISOs must stay vigilant and proactive in addressing these new vulnerabilities. By educating employees, prioritising security by design, conducting thorough threat modelling, considering multi-directional access, and deploying data-code segmentation, organisations can better protect themselves against the risks posed by AI. Collaboration with industry experts and staying informed about the latest developments in AI security will also be crucial in maintaining a robust security posture. Learn more with securing AI.


ai_icon_badge_stroke 2pt final.png

techUK - Seizing the AI Opportunity

The UK is a global leader in AI innovation, development and adoption. The economic growth and productivity gain that AI can unlock is vast, but to fully harness this transformative opportunity, immediate action is required. techUK and our members are committed to working with the Government to turn the AI Opportunities Action Plan into reality. Our aim is to ensure the UK seizes the opportunities presented by AI technology and continues to be a world leader in AI development. 

techUK runs a full calendar of activities including events, reports, and insights to demonstrate some of the most significant AI opportunities for the UK. Our AI Hub is where you will find details of all upcoming activity. We also send a monthly AI newsletter which you can subscribe to here.

Upcoming AI events

Latest news and insights

Sign-up to get the latest updates and opportunities across Technology and Innovation & AI.

Contact the team

Usman Ikhlaq

Usman Ikhlaq

Programme Manager - Artificial Intelligence, techUK

Learn more about our AI campaign:

Seizing the AI Opportunity generic AI campaign card.jpg

 

 

Authors

David Brauchler III

David Brauchler III

Technical Director , NCC Group