Navigating the Complexities of AI Governance: The Importance of the GOVERN and MAP Functions in the NIST AI RMF 1.0
Artificial intelligence (AI) has become an integral part of many industries and organizations, and as such, it is crucial to manage the risks associated with its use. The National Institute of Standards and Technology (NIST) has developed the AI Risk Management Framework (AI RMF) to help organizations manage these risks. The AI RMF is a comprehensive framework that provides a structured approach for organizations to identify, assess, and mitigate risks associated with AI systems.
One of the key elements of the AI RMF is the "GOVERN" function, which gives organizations the opportunity to clarify and define roles and responsibilities for humans in human-AI team configurations and for those overseeing AI system performance. This function also creates mechanisms for organizations to make their decision-making processes more explicit, helping to counter systemic biases.
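As an illustration only (the AI RMF does not prescribe any implementation), the kind of explicit role-and-responsibility definitions GOVERN encourages can be captured in a simple data structure. The `Role` class, role titles, and fields below are hypothetical:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Role:
    """Hypothetical record making human-AI team responsibilities explicit."""
    title: str
    responsibilities: tuple
    can_override_ai: bool  # is this role empowered to overrule AI output?


# Example team configuration (illustrative names, not from the framework)
ROLES = (
    Role("Operator", ("run inference", "flag anomalous output"), can_override_ai=True),
    Role("Oversight lead", ("review override logs", "audit decisions"), can_override_ai=True),
    Role("AI system", ("generate recommendations",), can_override_ai=False),
)


def who_can_override(roles):
    """List role titles explicitly authorized to overrule AI system output."""
    return [r.title for r in roles if r.can_override_ai]
```

Writing the configuration down in this form makes the decision-making process auditable: anyone can query which roles hold override authority rather than relying on implicit convention.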
Another key element of the AI RMF is the "MAP" function, which suggests opportunities to define and document processes for operator and practitioner proficiency with AI system performance and trustworthiness concepts, and to define relevant technical standards and certifications. Implementing MAP categories and subcategories may help organizations improve their internal competency for analyzing context, identifying procedural and system limitations, exploring and examining the real-world impacts of AI-based systems, and evaluating decision-making processes throughout the AI lifecycle.
The AI RMF also emphasizes the importance of interdisciplinary and demographically diverse teams, and of using feedback from potentially impacted individuals and communities. AI actors identified in the AI RMF who perform human-factors tasks and activities can assist technical teams by anchoring design and development practices to user intentions, to representatives of the broader AI community, and to societal values. These actors also help incorporate context-specific norms and values into system design and evaluate end-user experiences in conjunction with AI systems.
AI risk management approaches for human-AI configurations will be augmented by ongoing research and evaluation. For example, the degree to which humans are empowered and incentivized to challenge AI system output requires further study. Data about the frequency with which humans overrule AI system output in deployed systems, and their rationale for doing so, may be useful to collect and analyze.
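A minimal sketch of how such override data might be collected, assuming a simple in-memory log (the `OverrideLog` class and its fields are hypothetical, not prescribed by the AI RMF):

```python
from dataclasses import dataclass, field


@dataclass
class OverrideLog:
    """Hypothetical log of human decisions on AI system outputs."""
    # Each record is (accepted: bool, rationale: str)
    records: list = field(default_factory=list)

    def record(self, accepted: bool, rationale: str = "") -> None:
        """Log whether a human accepted an AI output, with an optional rationale."""
        self.records.append((accepted, rationale))

    def override_rate(self) -> float:
        """Fraction of AI outputs that human operators overruled."""
        if not self.records:
            return 0.0
        overrides = sum(1 for accepted, _ in self.records if not accepted)
        return overrides / len(self.records)


log = OverrideLog()
log.record(accepted=True)
log.record(accepted=False, rationale="model missed recent policy change")
log.record(accepted=True)
print(round(log.override_rate(), 2))  # → 0.33, one of three outputs overruled
```

Capturing the rationale alongside each override, as the text suggests, is what makes the data analyzable later: patterns in the free-text rationales can reveal systematic model weaknesses rather than just an aggregate rate.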
The AI RMF has several key attributes that guide its development. It is risk-based, resource-efficient, pro-innovation, and voluntary. It is consensus-driven and is developed and regularly updated through an open, transparent process. It uses clear, plain language understandable by a broad audience, while retaining sufficient technical depth to be useful to practitioners. It provides a common language and understanding for managing AI risks, and it is easily usable and fits well with other aspects of risk management. It is useful to a wide range of perspectives, sectors, and technology domains; it is outcome-focused and non-prescriptive; it takes advantage of and fosters greater awareness of existing standards, guidelines, best practices, methodologies, and tools for managing AI risks; it is law- and regulation-agnostic; and it is a living document.
In conclusion, the AI RMF is a comprehensive framework that provides a structured approach for organizations to identify, assess, and mitigate risks associated with AI systems. It emphasizes the importance of interdisciplinary and demographically diverse teams, and of using feedback from potentially impacted individuals and communities. It also provides a common language and understanding for managing AI risks, and it fits well with other aspects of risk management. The AI RMF is a living document that will be updated as technology, understanding, and approaches to AI trustworthiness and uses of AI change, and as stakeholders learn from implementing AI risk management generally and this framework in particular.
• The AI Risk Management Framework (AI RMF) is a set of guidelines for managing risks associated with the use of artificial intelligence (AI) systems.
• The AI RMF Core is divided into four functions: GOVERN, MAP, MEASURE, and MANAGE, which apply across the stages of the AI system lifecycle.
• The GOVERN function cultivates a culture of risk management and is cross-cutting, informing the other three functions; it gives organizations the opportunity to clarify and define roles and responsibilities for humans in human-AI team configurations and for those overseeing AI system performance.
• The MAP function establishes the context in which an AI system will operate and identifies the risks related to that context, including the potential for harm and the likelihood of that harm occurring.
• The MEASURE function uses quantitative, qualitative, or mixed-method tools and techniques to analyze, assess, benchmark, and monitor identified AI risks and related impacts.
• The MANAGE function allocates resources to mapped and measured risks, prioritizing risks and acting upon them, including through incident response plans and other mitigation measures.
• The AI RMF strives to be risk-based, resource-efficient, pro-innovation, and voluntary and developed through an open, transparent process.
• The AI RMF should be understandable by a broad audience, provide a common language and understanding to manage AI risks and be easily usable and adaptable as part of an organization’s broader risk management strategy and processes.
• The AI RMF should be useful to a wide range of perspectives, sectors, and technology domains and be outcome-focused and non-prescriptive.
• The AI RMF should take advantage of and foster greater awareness of existing standards, guidelines, best practices, methodologies, and tools for managing AI risks and be law- and regulation-agnostic.
• The AI RMF should be a living document, readily updated as technology, understanding, and approaches to AI trustworthiness and uses of AI change.
Links:
https://www.nist.gov/news-events/news/2023/01/nist-risk-management-framework-aims-improve-trustworthiness-artificial