UK Launches £200,000 Grants for Systemic AI Safety: Fostering Trust and Safety
The UK government has launched a new grants programme aimed at enhancing society's resilience against potential AI risks. Introduced on October 15, 2024, this initiative is a partnership between the Engineering and Physical Sciences Research Council (EPSRC) and Innovate UK, both part of UK Research and Innovation (UKRI).
The programme's primary focus is to support research addressing various AI-related challenges, including mitigating risks from deepfakes and misinformation, enhancing cybersecurity against AI-driven attacks, and addressing potential AI system failures in critical sectors.
Funding Details
With a total fund of £8.5 million, the programme's first phase will allocate £4 million to support approximately 20 projects. Each successful project can receive up to £200,000 in funding. This initiative seeks to boost public confidence in AI technology while supporting responsible and trustworthy AI development, aligning with the government's broader strategy to harness AI's potential for economic growth and public service improvement.
The Application Process
The application process is open to UK-based organisations, with a deadline set for November 26, 2024. Importantly, the programme encourages international collaboration, recognising the global nature of AI development and its challenges. Applicants will be evaluated based on the potential impact of their research in addressing AI-related risks.
The call is now open, with applications due by 26 November 2024. AISI will also host a webinar to answer questions about the grants programme; registration is via the sign-up form linked on the programme page.
Opportunities to Engage and Timeline
By involving a diverse range of expertise from both academia and industry, the initiative aims to develop practical solutions that can be implemented in real-world scenarios.
Industry involvement is crucial for ensuring that the research outcomes are relevant and applicable to current and future AI deployments across various sectors.
The timeline for the programme is structured to maintain momentum in this rapidly evolving field. After the application deadline in November, successful applicants will be announced in late January 2025, with the first round of grants awarded in February 2025. This swift process reflects the urgency and importance of addressing AI safety concerns.
Conclusions
This grants programme is a significant step in the UK's approach to AI safety and ethics. By supporting research into systemic AI safety, the initiative aims to identify critical risks and develop long-term solutions for AI deployment in essential sectors such as healthcare and energy services. It reflects a balanced and proportionate approach to regulation, seeking to foster innovation while safeguarding public interests.
Organisations interested in contributing to the development of safe and trustworthy AI systems for the benefit of society are encouraged to apply through the dedicated website. This programme offers a unique opportunity to shape the future of AI safety and ethics, ensuring that the UK remains at the forefront of responsible AI development and deployment.
“My focus is on speeding up the adoption of AI across the country so that we can kickstart growth and improve public services. Central to that plan, though, is boosting public trust in the innovations which are already delivering real change... That’s where this grants programme comes in. By tapping into a wide range of expertise from industry to academia, we are supporting the research which will make sure that as we roll AI systems out across our economy, they can be safe and trustworthy at the point of delivery.”
“This grants programme allows us to advance broader understanding of the emerging topic of systemic AI safety. It will focus on identifying and mitigating risks associated with AI deployment in specific sectors which could impact society, whether that’s in areas like deepfakes or the potential for AI systems to fail unexpectedly... By bringing together researchers from a wide range of disciplines and backgrounds into this process of contributing to a broader base of AI research, we’re building up empirical evidence of where AI models could pose risks so we can develop a rounded approach to AI safety for the global public good.”