Event round-up: Putting AI into Action webinar on the AI trust gap
Chair:
- Usman Ikhlaq – Programme Manager for Artificial Intelligence, techUK
Panellists:
- Nicole Carignan – VP of Strategic Cyber AI, Darktrace
- Pete Rai – Principal Engineer, Cisco
- Sarah Cameron – Legal Director, Pinsent Masons
- Nick Sinnott – Head of AI Centre Domain Excellence, BAE Systems Digital Intelligence
Summary:
1. What is AI trust and why is it important?
- Foundational need for trust: AI trust is crucial for ensuring that AI systems perform tasks accurately, securely, and ethically. Trust matters because competitors, or even adversaries, can adopt AI without concern for safety or accuracy, whereas defenders and good-faith actors must ensure their AI systems are reliable and free from vulnerabilities.
- Confidence in data and people: Trust in AI stems from confidence in the underlying data and the competence of those developing and deploying AI systems. Ethical and responsible adoption can unlock digital innovation and address major global challenges.
- Human-centric trust: Trust in AI revolves around predictable outcomes, ensuring that users understand how AI functions and can rely on the system when delegating decisions to it. This human aspect is essential for AI to be perceived as safe.
- Trust as a moving target: Trust evolves as AI systems develop. There are different modes of AI (assistance, augmentation, and autonomy), and trust will need to grow as AI transitions from aiding humans to operating independently.
2. Leading causes of distrust in AI within organisations
- Uncertainty: The evolving nature of AI creates uncertainty about when and how to deploy it effectively, especially given the rapid development of new technologies. Companies are hesitant to adopt AI because the return on investment is unclear and its impact on organisational processes is pervasive.
- Bias and data integrity: Bias in AI systems, particularly around issues like recruitment or healthcare, is a major concern. Organisations fear perpetuating biases embedded in data. Tools and frameworks exist to mitigate bias, but complete elimination is difficult.
- Security vulnerabilities: There is distrust around the potential risks AI may introduce, including security breaches and data exposure. Customers worry about risk amplification, such as AI surfacing previously hidden organisational risks.
- Job displacement and ROI: The fear that AI will displace jobs, particularly among vulnerable workers, contributes to distrust. Organisations struggle to adapt their processes to new technologies and may not see immediate returns, leading to scepticism.
3. What can organisations and governments do to tackle AI trust issues?
- AI literacy: Increasing AI literacy is crucial for effective governance. The EU AI Act mandates AI education for developers and deployers, which will help ensure compliance and drive governance improvements. The UK should focus on enhancing AI literacy without relying on rigid regulations.
- Government leadership and data privacy: Governments need to lead by example, particularly in areas like public sector AI deployment. Organisations must prioritise data privacy and integrity, ensuring that AI processes data locally and securely to mitigate risks of exposure.
- Security and international cooperation: Governments and organisations should collaborate internationally to establish standards and share intelligence on AI threats. Security, both in terms of system design and operational use, needs to be embedded early in AI development.
- Strategic deployment and business change: Organisations must develop clear strategies for AI deployment, ensuring governance, data management, and business change processes are in place. Trust grows when organisations prepare adequately for AI integration, considering long-term operational impacts.