
1.3. Where to Get More Information

By applying recommendations in the AI RMF, organizations will be better equipped to govern, map, measure, and manage the risks of AI. Using the AI RMF may reduce the likelihood and degree of negative impacts and increase the benefits to individuals, groups, communities, organizations, and society. Applying the Framework at the beginning of an AI system’s lifecycle should dramatically increase the likelihood that the resulting system will be more trustworthy – and that risks to individuals, groups, communities, organizations, and society will be managed more effectively. It is incumbent on Framework users to apply the AI RMF functions regularly, as context, stakeholder expectations, and knowledge will evolve over time and as their AI systems are updated or expanded.

NIST’s development of the AI RMF in collaboration with the private and public sectors is directed – and consistent with its broader AI efforts called for – by the National Artificial Intelligence Initiative Act of 2020 (P.L. 116-283), the National Security Commission on Artificial Intelligence recommendations, and the Plan for Federal Engagement in Developing Technical Standards and Related Tools. Engagement with the broad AI community during this Framework’s development informs AI research, development, and evaluation by NIST and others.

Part 1 of this Framework establishes the context for the AI risk management process. Part 2 provides guidance on outcomes and activities to carry out that process to maximize the benefits and minimize the risks of AI. A companion resource, the AI RMF Playbook, offers sample practices to be considered in carrying out this guidance before, during, and after AI products, services, and systems are developed and deployed.


The Framework and supporting resources will be updated, expanded, and improved based on evolving technology, the standards landscape around the globe, and stakeholder feedback. As the AI RMF is put into use, additional lessons will be learned to inform future updates and additional resources.

The AI RMF and the Playbook will be supported by a broader NIST Trustworthy and Responsible AI Resource Center containing documents, taxonomies, toolkits, datasets, code, and other forms of technical guidance related to the development and implementation of trustworthy AI. The Resource Center will include a knowledge base of trustworthy and responsible AI terminology and how those terms are used by different stakeholders, along with documents that provide a deeper understanding of trustworthy characteristics and their inherent challenges. The AI RMF provides high-level guidance for managing the risks of AI systems. While practical guidance published by NIST serves as an informative reference, all such guidance remains voluntary.
