Concept · Updated Apr 18, 2026
Systemic Risk
ai-safety · risk-assessment · general-purpose-ai
- Jurisdiction: EU
- Effective: 2024-08-01
- Issuer: European Parliament
Under Article 3(65) of the EU AI Act (Regulation (EU) 2024/1689), systemic risk means "a risk that is specific to the high-impact capabilities of general-purpose AI models, having a significant impact on the Union market due to their reach, or due to actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights, or the society as a whole, that can be propagated at scale across the value chain."
Types of Systemic Risks
Recital 110 identifies systemic risks including:
- Major accidents and disruptions of critical sectors
- Serious consequences to public health and safety
- Negative effects on democratic processes and public/economic security
- Dissemination of illegal, false, or discriminatory content
- Chemical, biological, radiological, and nuclear risks
- Offensive cyber capabilities
- Effects of interaction and tool use
- Model self-replication capabilities
- Harmful bias and discrimination
- Disinformation and privacy threats
- Chain reaction events with considerable negative effects
Risk Factors
Systemic risks increase with:
- Model capabilities and reach
- Conditions of misuse
- Model reliability, fairness, and security issues
- Level of autonomy
- Access to tools
- Novel or combined modalities
- Release and distribution strategies
- Potential to remove guardrails
Classification Criteria
Under Article 51, a general-purpose AI model is classified as presenting systemic risk if it has:
- High-impact capabilities, presumed when the cumulative compute used for its training exceeds 10²⁵ floating-point operations (FLOPs)
- Capabilities or impact equivalent to high-impact capabilities, as determined by a decision of the Commission
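The 10²⁵ FLOPs presumption can be sanity-checked with a rough estimate of training compute. The sketch below uses the common 6 × parameters × training-tokens approximation from the scaling-law literature; that heuristic, and the example model sizes, are illustrative assumptions, not part of the Act.

```python
# Sketch: checking a model against the Article 51(2) presumption
# threshold of 10**25 training FLOPs. The 6*N*D estimate
# (FLOPs ~ 6 x parameters x training tokens) is a common heuristic
# for dense transformers, not a method prescribed by the Act.

THRESHOLD_FLOPS = 1e25  # Article 51(2) presumption threshold

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Rough training-compute estimate for a dense transformer."""
    return 6 * n_params * n_tokens

def presumed_high_impact(n_params: float, n_tokens: float) -> bool:
    """True if the compute estimate exceeds the 10^25 FLOPs presumption."""
    return estimated_training_flops(n_params, n_tokens) > THRESHOLD_FLOPS

# A hypothetical 70B-parameter model trained on 15T tokens:
# 6 * 7e10 * 1.5e13 = 6.3e24 FLOPs -> below the threshold.
print(presumed_high_impact(7e10, 1.5e13))    # False
# A hypothetical 1.8T-parameter model trained on 13T tokens:
# ~1.4e26 FLOPs -> above the threshold.
print(presumed_high_impact(1.8e12, 1.3e13))  # True
```

Note that the presumption is rebuttable and the Commission may also designate models on other grounds, so a compute estimate alone does not settle classification.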
Additional Obligations
Under Article 55, providers of general-purpose AI models with systemic risk must:
- Perform model evaluation including adversarial testing
- Assess and mitigate systemic risks
- Report serious incidents
- Ensure adequate cybersecurity protection
- Comply with codes of practice