Concept · Updated Apr 18, 2026
Secure and Resilient AI
Tags: trustworthy-ai, cybersecurity
Jurisdiction: US-Federal
Secure and resilient AI systems maintain confidentiality, integrity, and availability while withstanding unexpected adverse events or changes in their environment. These related but distinct characteristics are essential for trustworthy AI.
Security: AI systems that prevent unauthorized access and use through protection mechanisms. Common AI-specific security concerns include:
- Adversarial Examples: Inputs designed to cause misclassification
- Data Poisoning: Malicious manipulation of training data
- Model Extraction: Replicating a model's functionality, or recovering sensitive training data, through repeated queries to its endpoints
- Intellectual Property Theft: Exfiltration through AI system endpoints
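To make the first of these concrete, here is a minimal sketch of an adversarial example against a toy linear classifier. The weights, inputs, and step size are all illustrative (not drawn from any real system); the perturbation follows the fast-gradient-sign idea of nudging each feature in the direction that flips the model's score.

```python
import numpy as np

def predict(w, b, x):
    """Linear score; positive -> class 1, negative -> class 0."""
    return float(np.dot(w, x) + b)

def fgsm_perturb(w, x, eps=0.5):
    """Shift x by eps against the sign of the score's gradient.

    For a linear model the gradient of the score w.r.t. x is just w,
    so a small step against sign(w) on every feature is the cheapest
    way to push the score toward the opposite class.
    """
    return x - eps * np.sign(w)

# Illustrative model and input (hypothetical values).
w = np.array([1.0, -2.0, 0.5])
b = 0.1
x = np.array([0.4, -0.2, 0.3])

clean_score = predict(w, b, x)        # positive: classified as 1
x_adv = fgsm_perturb(w, x, eps=0.5)   # small, bounded perturbation
adv_score = predict(w, b, x_adv)      # pushed negative: class flips to 0
```

The same principle scales to deep networks, where the gradient is computed by backpropagation rather than read directly off the weights; defenses typically combine adversarial training with input validation.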
Resilience: The ability to:
- Withstand unexpected adverse events or environmental changes
- Maintain functions and structure despite internal and external changes
- Degrade safely and gracefully when necessary
- Return to normal function after disruption
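The "degrade safely" and "return to normal function" points above can be sketched with a simple circuit-breaker-style wrapper around a model call. Everything here is illustrative: the model, the fallback answer, and the failure threshold are hypothetical choices, not a prescribed design.

```python
class ResilientPredictor:
    """Wrap a model so failures degrade to a safe default, not a crash."""

    def __init__(self, model, fallback, max_failures=3):
        self.model = model
        self.fallback = fallback          # conservative safe answer
        self.max_failures = max_failures  # consecutive failures to trip
        self.failures = 0

    @property
    def degraded(self):
        # Circuit "open": stop calling the failing model entirely.
        return self.failures >= self.max_failures

    def predict(self, x):
        if self.degraded:
            return self.fallback
        try:
            result = self.model(x)
            self.failures = 0             # recovery: reset on success
            return result
        except Exception:
            self.failures += 1            # count toward tripping
            return self.fallback

def flaky_model(x):
    raise RuntimeError("backend unavailable")

p = ResilientPredictor(flaky_model, fallback="REVIEW_MANUALLY")
answers = [p.predict(i) for i in range(4)]  # every call degrades safely
```

In a production system the breaker would also retry the model after a cooldown so the service can return to normal once the disruption passes.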
Key Differences:
- Resilience focuses on recovery and adaptation after adverse events
- Security encompasses resilience plus protocols to avoid, protect against, respond to, and recover from attacks
- Resilience extends beyond data provenance to include unexpected or adversarial use of models
Implementation Approaches:
- Apply existing frameworks like the NIST Cybersecurity Framework and Risk Management Framework
- Implement AI-specific security measures for unique attack vectors
- Design systems to fail safely under adverse conditions
- Establish monitoring and response capabilities
- Plan for both technical and operational resilience
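As one hedged example of the monitoring point above, a deployment can flag inputs whose statistics drift far from a training-time baseline. The baseline data and z-score threshold below are illustrative; real monitors use richer distribution tests and alerting pipelines.

```python
import statistics

class DriftMonitor:
    """Flag inputs far outside the baseline seen during training."""

    def __init__(self, baseline, z_threshold=3.0):
        self.mean = statistics.mean(baseline)
        self.stdev = statistics.pstdev(baseline)  # population std dev
        self.z_threshold = z_threshold

    def is_anomalous(self, value):
        # How many baseline standard deviations from the baseline mean?
        z = abs(value - self.mean) / self.stdev
        return z > self.z_threshold

# Hypothetical training-time feature values.
baseline = [10.0, 10.5, 9.8, 10.2, 9.9, 10.1]
monitor = DriftMonitor(baseline)

ok = monitor.is_anomalous(10.3)     # near the baseline: not flagged
alert = monitor.is_anomalous(25.0)  # far outside: flagged for response
```

A flagged input would then feed the response side of the capability, e.g. routing the request to human review or tripping a fail-safe path.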
Secure and resilient AI requires ongoing attention as threat landscapes evolve and new vulnerabilities emerge in AI systems and their deployment environments.