Appropriate Transparency and Explainability Principle
- Jurisdiction: UK
UK AI regulatory principle requiring that AI systems be appropriately transparent and explainable, with the level of transparency and explainability proportionate to the risks the system presents.
Key Definitions:
- Transparency: Communication of appropriate information about an AI system to relevant people (how, when, and for which purposes it's used)
- Explainability: The extent to which relevant parties can access, interpret, and understand the decision-making processes of an AI system
Implementation Requirements:
- Sufficient information for regulators to give meaningful effect to other principles
- Information access for parties directly affected by AI systems to enforce their rights
- Product labeling and information provision as required by regulators
- Reference to technical standards (IEEE 7001-2021, ISO/IEC TS 6254, ISO/IEC 12792)
Context-Specific Application: The level of explainability needed varies significantly by context. A technical expert designing self-driving vehicles needs a detailed understanding of the system's decision-making capabilities, while a layperson may only need enough information to use the system safely. Regulators may require different levels of explanation in order to allocate responsibility for harmful outcomes.
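The audience-proportionate disclosure idea above can be sketched as a small data structure. This is a minimal, non-normative illustration: the audience categories follow the examples in the principle, but the specific disclosure items and all names (`Audience`, `Disclosure`, `required_disclosures`) are hypothetical assumptions, not anything prescribed by the UK framework or the referenced standards.

```python
from dataclasses import dataclass
from enum import Enum


class Audience(Enum):
    """Stakeholder groups drawn from the principle's context-specific examples."""
    REGULATOR = "regulator"
    AFFECTED_PARTY = "affected party"
    TECHNICAL_EXPERT = "technical expert"
    LAY_USER = "lay user"


@dataclass
class Disclosure:
    """The transparency information owed to one audience."""
    audience: Audience
    items: list[str]


def required_disclosures(audience: Audience) -> Disclosure:
    """Return an illustrative disclosure set for an audience.

    The groupings mirror the principle's logic: regulators need enough
    information to give effect to the other principles, affected parties
    need enough to enforce their rights, technical experts need
    decision-making detail, and lay users need information for safe use.
    The item lists themselves are invented for illustration.
    """
    catalogue = {
        Audience.REGULATOR: ["purpose of use", "risk assessment", "logic overview"],
        Audience.AFFECTED_PARTY: ["fact that AI was used", "how to contest a decision"],
        Audience.TECHNICAL_EXPERT: ["model architecture", "decision-making capabilities"],
        Audience.LAY_USER: ["instructions for safe use"],
    }
    return Disclosure(audience, catalogue[audience])


print(required_disclosures(Audience.LAY_USER).items)
```

A real transparency scheme would also condition the disclosure set on the system's risk level, which this sketch omits for brevity.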
Rationale: Transparency increases public trust and AI adoption. When AI systems lack sufficient explainability, suppliers and users risk breaking laws, infringing rights, causing harm, and compromising security. The principle recognizes that explainability remains a technical challenge and that appropriate levels must take account of context, risk, and the state of the art.