
On March 17, 2022, the National Institute of Standards and Technology (“NIST”) published an initial draft of its Artificial Intelligence (AI) Risk Management Framework (“AI RMF”) to promote the development and use of responsible AI technologies and systems.  When final, the three-part AI RMF is intended for voluntary use and to improve the ability to incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems.  NIST has only developed the first two parts in this initial draft:

  • In Part I, Motivation, the AI RMF establishes the context for the AI risk management process.  It identifies three overarching categories of characteristics that should be identified and managed in connection with AI risks: technical characteristics, socio-technical characteristics, and guiding principles.
  • In Part II, Core and Profiles, the AI RMF provides guidance on outcomes and activities to carry out the risk management process to maximize the benefits and minimize the risks of AI.  It states that the core comprises three elements: functions, categories, and subcategories.  The initial draft examines how “functions organize AI risk management activities at their highest level to map, measure, manage, and govern AI risks.”

The forthcoming Part III will provide guidance on how to use the AI RMF—like a practice guide—and will be developed from feedback to this initial draft.

Overall, the AI RMF is intended for use with any AI system across a wide spectrum of types, applications, and maturity levels, and by individuals and organizations regardless of sector, size, or level of familiarity with a specific type of technology.  That said, NIST cautions that the AI RMF will not be a checklist and should not be used in any way to certify an AI system.  Nor should it be used as a substitute for due diligence and judgment by organizations or individuals in deciding whether to design, develop, and deploy AI technologies.

Along with the AI RMF, NIST also released Special Publication 1270, titled “Towards a Standard for Identifying and Managing Bias in Artificial Intelligence” (“Guidance”), which outlines standards for addressing bias in AI.  NIST’s stated intent in releasing the Guidance is “to surface the salient issues in the challenging area of AI bias, and to provide a first step on the roadmap for developing detailed socio-technical guidance for identifying and managing AI bias.”  Specifically, the Guidance:

  • describes the stakes and challenges of bias in AI and provides examples of how and why it can chip away at public trust;
  • identifies three categories of bias in AI—systemic, statistical, and human—and describes how and where they contribute to harms; and
  • describes three broad challenges for mitigating bias—datasets, testing and evaluation, and human factors—and introduces preliminary guidance for addressing them.

The Guidance provides a number of helpful recommendations that AI developers and risk management professionals may consider to help identify, mitigate, and remediate bias throughout the AI lifecycle.

At the direction of Congress, NIST is seeking collaboration with both the public and private sectors to develop the AI RMF.  NIST is accepting public comments through April 29, 2022, which will be incorporated into a second draft of the AI RMF to be published this summer or fall.  In addition, from March 29-31, 2022, NIST is holding a two-part workshop on the AI RMF and bias in AI.