Concept · Updated Apr 18, 2026
AI System Deployers
Tags: ai-operators, obligations, end-users
- Jurisdiction: EU
- Effective: 2024-08-01
- Issuer: European Parliament
Under Article 3(4) of the eu-ai-act-regulation-2024-1689, a "deployer" means "a natural or legal person, public authority, agency or other body using an AI system under its authority except where the AI system is used in the course of a personal non-professional activity."
Key Obligations for High-Risk AI Systems (Article 26)
Deployers must:
- Use systems according to instructions for use
- Assign human oversight to competent persons
- Ensure input data is relevant and representative (where they control it)
- Monitor system operation and report issues
- Retain automatically generated logs for at least six months (unless applicable law provides otherwise)
- Inform workers before workplace deployment
- Register use in EU database (public authorities)
- Conduct data protection impact assessments where applicable
- Inform natural persons subject to system use
- Cooperate with competent authorities
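The log-retention obligation above lends itself to an automated policy check. The sketch below is a hypothetical illustration of how a deployer might enforce the six-month minimum before purging system logs; the function names and the 183-day approximation of "six months" are assumptions, not anything prescribed by the Regulation.

```python
from datetime import datetime, timedelta, timezone

# Illustrative minimum retention for automatically generated logs
# (Article 26 requires at least six months; 183 days is a conservative
# approximation chosen here, not a figure from the Regulation).
MIN_RETENTION = timedelta(days=183)

def earliest_allowed_deletion(created_at: datetime) -> datetime:
    """Earliest moment a log record may lawfully be purged under the minimum."""
    return created_at + MIN_RETENTION

def may_delete(created_at: datetime, now: datetime) -> bool:
    """True only once the record has been retained for the full minimum period."""
    return now >= earliest_allowed_deletion(created_at)
```

A retention job would call `may_delete` per record and skip anything still inside the window; stricter sector-specific retention rules would simply extend `MIN_RETENTION`.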
Human Oversight Requirements
Deployers must assign oversight to persons with:
- Necessary competence and training
- Appropriate authority
- Adequate support
For biometric identification systems, enhanced oversight requires separate verification by at least two persons (with exceptions for law enforcement where disproportionate).
Fundamental Rights Impact Assessment (Article 27)
Required for:
- Public bodies and entities providing public services
- Banking and insurance entities (for certain systems)
Assessment must include:
- Description of deployment processes
- Time period and frequency of use
- Categories of affected persons
- Specific risks to fundamental rights
- Human oversight implementation
- Risk mitigation measures
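An organisation tracking these Article 27 elements internally might model the assessment as a structured record. The class below is a minimal sketch of such a record with one field per listed element; the field names and the `is_complete` check are illustrative assumptions, not a format mandated by the Act.

```python
from dataclasses import dataclass

@dataclass
class FundamentalRightsImpactAssessment:
    """Hypothetical internal record mirroring the Article 27 elements listed above."""
    deployment_process: str          # description of deployment processes
    period_and_frequency: str        # time period and frequency of use
    affected_categories: list[str]   # categories of affected persons
    risks_to_rights: list[str]       # specific risks to fundamental rights
    oversight_measures: list[str]    # human oversight implementation
    mitigation_measures: list[str]   # risk mitigation measures

    def is_complete(self) -> bool:
        """Every element must be filled in before the assessment is filed."""
        return all([
            self.deployment_process,
            self.period_and_frequency,
            self.affected_categories,
            self.risks_to_rights,
            self.oversight_measures,
            self.mitigation_measures,
        ])
```

A completeness gate like `is_complete` could block deployment sign-off until all six elements are documented.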
Post-Remote Biometric Identification (Article 26(10))
For law enforcement use, deployers must:
- Request judicial/administrative authorization (ex ante or within 48 hours)
- Limit use to specific criminal investigations
- Document each use in police files
- Submit annual reports to authorities
- Take no decision with adverse legal effect on a person based solely on system output
Information Obligations
Deployers must inform natural persons when they are subject to high-risk AI system use, including:
- System's intended purpose
- Type of decisions made
- Right to explanation
Monitoring and Reporting
Deployers must:
- Monitor system operation based on instructions
- Report risks and incidents to providers and authorities
- Suspend use if risks identified
- Maintain logs for appropriate periods