Safety, Security and Robustness Principle
- Jurisdiction: UK
- Issuer: UK Government
One of five cross-sectoral principles in the UK's AI regulatory framework requiring that AI systems function in a robust, secure and safe way throughout the AI lifecycle, with risks continually identified, assessed and managed.
Key Requirements:
- Technical security and reliable functioning as intended
- Awareness of security threats at different lifecycle stages
- Embedding resilience against threats into systems
- Regular testing and due diligence on system functioning
- Consideration of the NCSC's principles for securing machine learning models
Implementation Considerations:
- Risk management frameworks for AI lifecycle actors
- Regular model reviews as mitigation strategy
- Reference to technical standards, including ISO/IEC 24029-2 (assessment of the robustness of neural networks), the ISO/IEC 5259 series (data quality for analytics and machine learning) and ISO/IEC TR 5469 (functional safety and AI systems)
- Coordination with other regulators on security guidance
Rationale: AI's broad applicability and capacity for autonomous action mean its safety and security impacts span many domains. While these risks are most apparent in critical sectors such as health and infrastructure, safety considerations apply across all regulatory domains. The principle is intended to ensure that systems are technically secure and function reliably as intended, and that actors remain vigilant to security issues throughout the AI lifecycle.
Regulators must provide proportionate guidance appropriate to their sectors while coordinating with one another to ensure coherent implementation.