Concept · Updated Apr 18, 2026
AI Bias
Tags: ai-ethics, fairness
- Jurisdiction: US-Federal
- Issuer: NIST
AI bias refers to systematic and unfair discrimination in AI systems that can perpetuate and amplify existing inequalities. NIST identifies three major categories of AI bias that must be considered and managed for an AI system to be fair, with harmful bias managed.
Three Categories of AI Bias:
1. Systemic Bias:
- Present in AI datasets reflecting societal inequalities
- Embedded in organizational norms, practices, and processes
- Reflects broader societal biases and historical discrimination
- Can be perpetuated through data collection and system design choices
2. Computational and Statistical Bias:
- Present in datasets and algorithmic processes
- Often stems from systematic errors due to non-representative samples (see the sketch after this list)
- Can result from technical choices in model design and training
- May be introduced through data preprocessing or feature selection
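To make the non-representative-sample failure mode concrete, here is a minimal Python sketch that compares group proportions in a dataset against a reference population. The group labels, counts, and reference shares are all hypothetical, chosen only for illustration:

```python
from collections import Counter

def representation_gap(samples, reference_shares):
    """Compare group shares in a dataset against reference population
    shares. Returns per-group (observed - expected) differences."""
    counts = Counter(samples)
    total = sum(counts.values())
    return {
        group: counts.get(group, 0) / total - expected
        for group, expected in reference_shares.items()
    }

# Hypothetical example: training data drawn from group "A" far more
# often than the reference population would suggest (sampling bias).
training_groups = ["A"] * 80 + ["B"] * 20
census_shares = {"A": 0.6, "B": 0.4}  # assumed reference distribution

for group, gap in representation_gap(training_groups, census_shares).items():
    print(f"{group}: {gap:+.2f}")  # A: +0.20, B: -0.20
```

A persistent gap like this suggests the data collection process, not just the model, is the first thing to audit.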
3. Human-Cognitive Bias:
- Related to how individuals or groups perceive AI system information
- Affects decision-making throughout the AI lifecycle
- Omnipresent in design, implementation, operation, and maintenance
- Influences how humans interpret and act on AI system outputs
Key Characteristics:
- Bias can occur without prejudice, partiality, or discriminatory intent
- AI systems can increase the speed and scale of biased outcomes
- Bias exists in many forms and can become ingrained in automated systems
- Managing bias does not automatically ensure fairness
Management Approaches:
- Recognize that bias is broader than demographic balance
- Address multiple types of bias simultaneously
- Engage diverse stakeholders in bias assessment
- Monitor for impacts across different groups and contexts
- Implement bias testing throughout the system lifecycle (a minimal example follows this list)
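As one illustration of lifecycle bias testing, the following Python sketch computes a demographic parity difference: the gap in positive-prediction rates between groups. The metric choice and the model outputs here are assumptions for illustration, not a NIST-prescribed test:

```python
def demographic_parity_difference(y_pred, groups):
    """Largest gap in positive-prediction rate between any two groups.
    0.0 means equal selection rates; larger values flag potential bias."""
    rates = {}
    for g in set(groups):
        preds = [p for p, grp in zip(y_pred, groups) if grp == g]
        rates[g] = sum(preds) / len(preds)
    return max(rates.values()) - min(rates.values())

# Hypothetical binary predictions from a screening model, by group.
y_pred = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
print(f"{demographic_parity_difference(y_pred, groups):.2f}")  # 0.60
```

In practice such a check would be rerun at each retraining and in each deployment context, and, as noted above, a passing metric alone does not ensure fairness.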
Effective AI bias management requires understanding the specific context of use and ongoing engagement with affected communities to identify and address potential harms.