On 11 March 2025, the European Commission published the third draft of the General-Purpose AI (GPAI) Code of Practice. The Code aims to detail how providers of GPAI models, and of GPAI models with systemic risk, may comply with their obligations under the EU’s AI Act. Adhering to the Code can provide a presumption of conformity with the AI Act.
As a reminder, under Article 2 of the AI Act, the law (and, by extension, these Code of Practice guidelines) applies to providers “irrespective of whether those providers are established or located within the EU or in a third country”, as long as the output produced by the AI system is used in the EU.
The Code is being drafted by four working groups chaired by independent experts and involves over 1,000 stakeholders, including businesses, EU Member States, civil society, and European and international observers.
The four Working Groups cover the following topics:
Working Group 1: Transparency and copyright-related rules
Working Group 2: Risk assessment for systemic risk
Working Group 3: Technical risk mitigation for systemic risk
Working Group 4: Governance risk mitigation for systemic risk
The following sections summarise the topics addressed by each working group.
Transparency and copyright-related issues
The latest draft sets out transparency obligations for all providers of GPAI models (except certain open-source models, unless they are deemed systemic-risk models). Signatories to the Code commit to drawing up model documentation, keeping it up to date, and sharing relevant information with downstream AI system integrators and with the EU’s AI Office upon request. The Code aims to facilitate this by providing a standardised Model Documentation Form. Providers are also responsible for ensuring the quality, security, and integrity of the information they document.
On copyright, the Code requires all GPAI model providers to uphold EU copyright law during the development and deployment of AI models by establishing and implementing a copyright compliance policy. Such a policy includes using state-of-the-art methods to identify and honour any reserved rights in training data. Several more detailed measures sit under this policy: for instance, it differentiates between the handling of web-crawled content and other protected content, requires steps to mitigate the risk of the model generating infringing outputs, and mandates a point of contact for copyright complaints.
Risk assessment for systemic risk
If a GPAI model has been classified as a model with systemic risk, the Code imposes extensive risk management duties on its provider. These duties include adopting a comprehensive safety and security framework that details the provider’s procedures for risk assessment, risk mitigation, and governance, so as to keep systemic risks at a level deemed acceptable. This risk assessment needs to be conducted throughout the model’s lifecycle. The Code breaks the risk assessment process down into multiple steps:
Identifying and characterising significant systemic risks
Performing a “rigorous analysis” of those risks to gauge their severity and likelihood
Evaluating whether each risk is acceptable or requires mitigation
Technical risk mitigation for systemic risk
When it comes to technical risk mitigation, providers must implement appropriate safety measures and security safeguards. Signatories to the Code commit to implementing technical mitigations for safety risks that could arise along the entire model lifecycle. Such measures could include techniques to make the model more robust or align its behaviour, as well as cybersecurity controls to prevent leaks or misuse of powerful AI models.
Governance risk mitigation for systemic risk
The Code also places strong emphasis on oversight and accountability. The draft Code calls for regular Safety and Security Model Reports that document compliance with the Code; these should be produced periodically so that governance keeps pace with new developments. The Code also requires companies releasing systemic-risk models to engage independent external experts to evaluate those models. Additionally, providers of systemic-risk models should establish channels for incident reporting and whistleblowing, and should notify the AI Office of relevant information about their GPAI models, ensuring regulators are kept in the loop about emergent risks or changes.
Differences from the previous draft
The third draft attempts to streamline the structure and add essential detail. One major change compared to the second draft is the removal of Key Performance Indicators as a standalone element; the new draft focuses instead on objectives, commitments, and measures. It also seeks to clarify and consolidate many provisions, such as those relating to copyright (multiple measures for developing a copyright policy are now unified under a single measure). The reorganisation aims to remove redundancies from the Code and create further clarity. Another addition is Appendix 2, which contains recommendations to the AI Office on reviewing and updating the Code over time, reflecting an acknowledgement that the Code will need to evolve alongside the technology.
Next steps
The Commission is now awaiting feedback from stakeholders by Sunday 30 March. According to the Commission’s own timeline, the final version of the Code of Practice is set to be presented and approved in May 2025.
If members have any views or questions, please reach out to [email protected]
Theophile Maiziere
Policy Manager - EU, techUK
Theo joined techUK in 2024 as EU Policy Manager. Based in Brussels, he works on our EU policy and engagement.
Theo is an experienced policy adviser who has helped connect EU and non-EU decision makers.
Prior to techUK, Theo worked at the EU delegation to Australia, the Israeli trade mission to the EU, and the City of London Corporation’s Brussels office. In his role, Theo ensures that techUK members are well-informed about EU policy, its origins, and its implications, while also facilitating valuable input to Brussels-based decision-makers.
Theo holds an LLM in International and European Law and an MA in European Studies, both from the University of Amsterdam.