The Information Technology Laboratory (ITL) AI program at NIST, working with both private and public partners, has developed a framework to help manage the risks posed by artificial intelligence (AI) to people, organizations, and society. The NIST AI Risk Management Framework (AI RMF) is intended to be used voluntarily and helps organizations incorporate trustworthiness into the design, development, use, and evaluation of AI products, services, and systems.  

The framework was released on January 26, 2023, after a development process that welcomed input from many sources, including public comments on draft versions, workshops, and requests for information. The framework is designed to support and align with other efforts to manage AI risks.

NIST has also published a companion AI RMF Playbook, an AI RMF Roadmap, AI RMF Crosswalks, and various Perspectives.

On March 30, 2023, NIST launched the Trustworthy and Responsible AI Resource Center (AIRC) to help organizations use the AI RMF and to encourage international alignment. You can find examples of how others are using the AI RMF on the AIRC's use case page.

On July 26, 2024, NIST released NIST AI 600-1, Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile. This profile helps organizations identify the unique risks of generative AI and suggests actions to manage them in line with their goals and priorities.

On April 7, 2026, NIST released a concept note for an AI RMF profile focused on trustworthy AI in critical infrastructure. This profile will guide operators of critical infrastructure in choosing risk management practices when using AI-enabled tools.  

You can view public comments on earlier drafts of the AI RMF and requests for information on the AI RMF development page.

To improve safety, security, reliability, capacity, and efficiency, the nation’s critical infrastructure will increasingly depend on technologies such as artificial intelligence (AI) across IT, OT, and industrial control systems. Using AI in these important areas requires systems that can be trusted. The NIST AI Risk Management Framework (AI RMF) was created to help organizations build trust in AI systems throughout their lifecycle, enabling them to benefit from AI while managing risks.  

As part of its strategy for American technology leadership, the NIST Information Technology Laboratory (ITL) is helping critical infrastructure sectors by developing an AI RMF profile for trustworthy AI in critical infrastructure. This profile will guide operators in choosing risk management practices when using AI. It will also help them clearly communicate their trustworthiness requirements to teams, developers, and other stakeholders throughout the AI and critical infrastructure lifecycles and supply chains.

NIST AI RMF Profile: Trustworthy AI in Critical Infrastructure Community of Interest 

NIST welcomes collaboration with industry user groups, regulators, policymakers, academia, and the wider community. By working together, NIST aims to develop a profile that gives critical infrastructure sectors greater confidence in using AI agents and tools. The profile will also give developers and vendors guidance and certainty to support the creation of innovative, trustworthy solutions.  

NIST is forming a Trustworthy AI in Critical Infrastructure Profile community of interest to gather feedback. Participation is open to everyone in the critical infrastructure ecosystem, including all sectors, roles, and supply chain partners.

You can sign up for our mailing list and join our upcoming community Slack channel, where NIST will host informal discussions and ask for real-time feedback. All important announcements will be shared through the mailing list. 

Source: AI Risk Management Framework 
