Concept · Updated Apr 18, 2026

MEASURE Function

Tags: risk-assessment, ai-testing
Jurisdiction: US-Federal
Issuer: NIST

The MEASURE function in the NIST AI Risk Management Framework employs quantitative, qualitative, or mixed-method tools to analyze, assess, benchmark, and monitor AI risk and related impacts. It draws on knowledge gained in the MAP function and informs the MANAGE function.

Key Categories:

MEASURE 1: Appropriate methods and metrics are identified and applied, starting with the most significant AI risks.

MEASURE 2: AI systems are evaluated for trustworthy AI characteristics including validity, reliability, safety, security, transparency, explainability, privacy, and fairness.

MEASURE 3: Mechanisms for tracking identified AI risks over time are in place.

MEASURE 4: Feedback about measurement efficacy is gathered and assessed from domain experts and relevant AI actors.
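As a concrete illustration of MEASURE 2, fairness is one trustworthy-AI characteristic that can be quantified directly. The sketch below (not part of the framework itself; the data and metric choice are hypothetical) computes a demographic parity difference, the gap in positive-prediction rates between two groups:

```python
# Illustrative sketch: measuring one trustworthy-AI characteristic
# (fairness) as a demographic parity difference. The sample predictions
# and group labels are hypothetical.

def demographic_parity_difference(predictions, groups):
    """Absolute gap in positive-prediction rates between two groups.

    predictions: list of 0/1 model outputs
    groups: parallel list of group labels (exactly two distinct values)
    """
    labels = sorted(set(groups))
    assert len(labels) == 2, "expects exactly two groups"
    rates = []
    for g in labels:
        outcomes = [p for p, grp in zip(predictions, groups) if grp == g]
        rates.append(sum(outcomes) / len(outcomes))
    return abs(rates[0] - rates[1])

preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
# group "a" rate = 0.75, group "b" rate = 0.25
print(demographic_parity_difference(preds, groups))  # -> 0.5
```

Many other characteristics in MEASURE 2 (validity, reliability, safety) admit analogous quantitative proxies, each with documented uncertainty per the framework's TEVV guidance.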

The MEASURE function emphasizes:

  • Rigorous TEVV processes with documented uncertainty measures
  • Performance benchmarking and formalized reporting
  • Independent review to mitigate internal biases
  • Regular monitoring of deployed systems
  • Tracking of emergent risks and system evolution
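The tracking and monitoring practices above can be sketched minimally as a metric log with a documented tolerance, flagging drift from a baseline; the metric name, baseline, and tolerance here are hypothetical, not values prescribed by NIST:

```python
# Illustrative sketch: tracking an identified risk metric over time and
# flagging measurements that drift beyond a documented tolerance, in the
# spirit of MEASURE 3. All values shown are hypothetical.

from dataclasses import dataclass, field

@dataclass
class RiskTracker:
    metric: str
    baseline: float
    tolerance: float                 # allowed absolute deviation from baseline
    history: list = field(default_factory=list)

    def record(self, value: float) -> bool:
        """Log a new measurement; return True if it breaches tolerance."""
        self.history.append(value)
        return abs(value - self.baseline) > self.tolerance

tracker = RiskTracker(metric="false_positive_rate",
                      baseline=0.05, tolerance=0.02)
print(tracker.record(0.06))  # within tolerance -> False
print(tracker.record(0.09))  # drifted beyond tolerance -> True
```

A real deployment would feed such a tracker from regular monitoring runs and route breaches into the MANAGE function's response processes.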

Measurement should adhere to scientific, legal, and ethical norms and be conducted transparently. Where tradeoffs exist between trustworthy AI characteristics, measurement provides a traceable basis for management decisions. The function recognizes that new measurement methodologies may need to be developed for AI-specific risks.

Effective measurement requires ongoing collaboration with diverse stakeholders and must evolve as AI technologies, methodologies, and understanding of risks advance.
