The UK’s AI Cyber Security Code of Practice: What It Means for Your Business

On January 31, 2025, the UK government published its AI Cyber Security Code of Practice, a voluntary framework aimed at mitigating security risks in AI systems.
The Code establishes baseline cybersecurity requirements across the AI lifecycle and is expected to inform changes to international standards through the European Telecommunications Standards Institute (ETSI). To assist organizations in applying its principles, the government has also released an Implementation Guide, which expands on specific security measures.
HackerOne offered input during the development of this Code, emphasizing the importance of independent security testing, AI red teaming, and vulnerability disclosure programs (VDPs). HackerOne’s recommendations, submitted during DSIT’s Call for Views on AI Cybersecurity, highlighted the need for external validation, proactive security testing, and structured vulnerability reporting mechanisms to improve AI security.
Who is the Code for?
The Code applies to developers, system operators, and data custodians involved in the creation, deployment, and management of AI systems. It sets out security measures covering five key phases: secure design, secure development, secure deployment, secure maintenance, and secure end of life. AI vendors who solely sell models or components without direct involvement in their implementation are not directly in scope but remain subject to other relevant cybersecurity standards.
How can organizations align with the Code?
The Code introduces 13 principles to safeguard AI from cyber threats, including data poisoning, adversarial attacks, and model exploitation. Organizations that choose to follow the Code need to integrate AI security into system design, assess risks throughout the AI lifecycle, and maintain transparency with end-users. Key provisions include:
- Ensuring AI security awareness among employees and stakeholders.
- Implementing supply chain security measures to prevent vulnerabilities in AI models.
- Conducting adversarial testing to proactively detect security weaknesses.
- Providing timely security updates and clear communication to end-users.
How does the Code address Independent Security Testing and Disclosure for AI?
A key focus of the Code is the requirement for independent security validation systems. Developers must ensure AI models undergo security testing before deployment, and the Code stresses the importance of involving independent security testers with expertise in AI-specific risks.
Additionally, the Code mandates the creation and maintenance of a Vulnerability Disclosure Program (VDP) for AI systems. This program is vital for enhancing transparency, allowing security flaws to be responsibly reported and mitigated.
The Implementation Guide further clarifies these expectations, emphasizing proactive security practices such as red teaming and adversarial testing. These techniques are essential for detecting vulnerabilities before they can be exploited, and the Guide offers practical steps to integrate these evaluations into the AI lifecycle. By following both the Code and the Implementation Guide, organizations can ensure a comprehensive, proactive approach to AI security – focusing on external validation, transparency, and ongoing testing to safeguard systems at every stage.
What’s the likely impact?
The Code signals a shift toward stronger regulatory expectations for AI security. As cyber threats targeting AI continue to evolve, organizations that adopt these security principles will be better positioned to comply with future standards and regulations, protect their users, and build trust in AI technologies.
The UK government has stated its intention for this Code to serve as the foundation for future ETSI standards, ensuring a unified and internationally recognized approach to AI cybersecurity. The government also plans to update the Code and the Guide to mirror the future ETSI global standard, reinforcing the alignment with international best practices.
How HackerOne can help:
Organizations navigating AI security challenges need robust testing and vulnerability management solutions. HackerOne helps organizations align with the Code’s security requirements through:
- Independent AI security assessments that align with Principles 9.1 and 9.2.1.
- Vulnerability Disclosure Programs (VDPs) to help meet Principle 6.4.
- Red teaming and adversarial testing to identify weaknesses before they can be exploited as mentioned in the Implementation Guide, sections 9.2, 9.2.1, and 11.2.
Contact HackerOne to learn more about securing your AI systems.