On January 26, 2023, the U.S. Department of Commerce’s National Institute of Standards and Technology (NIST) released its Artificial Intelligence Risk Management Framework (AI RMF). The AI RMF is intended to serve as a resource for organizations designing, developing, deploying, or using AI systems, helping them manage AI risks and promote trustworthy and responsible development and use of AI. Although compliance with the AI RMF is voluntary, regulators are increasingly scrutinizing AI, and the framework offers practical guidance for companies seeking to manage AI risks.
The AI RMF begins with a discussion of the harms that AI risk management systems should seek to address, which include:
- harm to people, such as harm to an individual’s civil liberties, rights, physical or psychological safety, or economic opportunity, as well as harm to groups or to society at large;
- harm to an organization, such as harm to business operations, security breaches, or monetary or reputational loss; and
- harm to an ecosystem, such as harm to interconnected and interdependent systems, including the global financial system, supply chains, or the environment.
The AI RMF identifies the following characteristics of trustworthy AI systems:
- valid and reliable;
- safe;
- secure and resilient;
- accountable and transparent;
- explainable and interpretable;
- privacy-enhanced; and
- fair, with harmful bias managed.
These characteristics also appear in other AI legal frameworks under development around the globe, such as the EU’s draft AI Act.
Risk Management Through the AI RMF Core
As an integral part of the framework, NIST outlines four core functions to help companies identify practical steps to manage their AI risk and to help ensure their AI systems exhibit all of the characteristics of a trustworthy AI system:
- Govern: cultivate a culture of risk management; governance is a cross-cutting function that informs the other three;
- Map: establish the context in which an AI system will operate and identify the risks related to that context;
- Measure: use quantitative and qualitative methods to analyze, assess, benchmark, and monitor the identified risks; and
- Manage: prioritize identified risks and allocate resources to address them based on their projected impact.
NIST highlights that governance structures to map, measure, and manage risk should be “continuous, timely, and performed throughout” the lifecycle of creating, implementing, deploying, and monitoring an algorithm, but the specific implementation of these functions is intended to be adapted to each business model.
For more information on implementing NIST’s AI RMF or otherwise implementing an AI risk management program, please contact Laura De Boel, Maneesha Mithal, or any other attorney from Wilson Sonsini’s privacy and cybersecurity practice or artificial intelligence and machine learning practice.
Stacy Okoro contributed to the preparation of this client alert.