On January 26, 2023, the National Institute of Standards and Technology (“NIST”) released guidance entitled Artificial Intelligence Risk Management Framework (AI RMF 1.0) (the “AI RMF”), intended to help organizations and individuals manage risks in the design, development, deployment, and use of AI systems. The AI RMF, like the White House’s recently published Blueprint for an AI Bill of Rights, is not legally binding. Nevertheless, as state and local regulators begin enforcing rules governing the use of AI systems, industry professionals will likely turn to NIST’s voluntary guidance when performing risk assessments of AI systems, negotiating contracts with vendors, performing audits on AI systems, and monitoring the use of AI systems.

NIST broadly defines an “AI system” as an “engineered or machine-based system that can, for a given set of objectives, generate outputs such as predictions, recommendations, or decisions influencing real or virtual environments.” This broad definition covers many of the commonly used AI-based hiring and recruitment products, such as resume screening software and gamified assessment or selection tests.

The AI RMF is divided into two parts. Part One provides foundational information about AI systems, including seven characteristics of trustworthy AI systems:

  • Valid and reliable – AI systems can be assessed by ongoing testing or monitoring to confirm that the system is performing as intended.
  • Safe – AI systems should not, under defined conditions, lead to a state in which human life, health, property, or the environment is endangered.
  • Secure and resilient – AI systems and their ecosystems are resilient when they are able to withstand unexpected adverse events or changes in their environment.
  • Accountable and transparent – Information about an AI system and its outputs increases confidence in the system and enables organizational practices and governing structures for harm reduction.  
  • Explainable and interpretable – The representation of the mechanisms underlying AI systems’ operation (explainability) and the meaning of an AI system’s output (interpretability) can assist those operating and overseeing AI systems.
  • Privacy-enhanced – Anonymity, confidentiality, and control generally should guide choices for AI system design, development and deployment.
  • Fair with harmful bias managed – NIST has identified three major categories of AI bias to be considered and managed: systemic, computational and statistical, and human-cognitive bias. (A simple monitoring check of the kind an employer might use to surface such bias is sketched below.)
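
To make the “valid and reliable” and “fair with harmful bias managed” characteristics more concrete, below is a minimal, hypothetical sketch (in Python) of the kind of ongoing monitoring check an employer might run against a resume-screening tool’s outcomes. The data, group labels, and 0.8 review threshold (which mirrors the commonly cited “four-fifths” screening benchmark) are illustrative assumptions, not NIST requirements.

    from collections import defaultdict

    # Hypothetical screening outcomes: (applicant_group, advanced_by_tool).
    # Group labels and records are illustrative only.
    outcomes = [
        ("group_a", True), ("group_a", False), ("group_a", True), ("group_a", True),
        ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
    ]

    def selection_rates(records):
        """Share of applicants in each group advanced by the tool."""
        advanced, total = defaultdict(int), defaultdict(int)
        for group, selected in records:
            total[group] += 1
            advanced[group] += int(selected)
        return {group: advanced[group] / total[group] for group in total}

    rates = selection_rates(outcomes)
    benchmark = max(rates.values())  # group with the highest selection rate

    for group, rate in rates.items():
        impact_ratio = rate / benchmark
        # 0.8 is an assumed review trigger; an organization would set its own.
        status = "review" if impact_ratio < 0.8 else "ok"
        print(f"{group}: selection rate {rate:.2f}, impact ratio {impact_ratio:.2f} ({status})")

Running a check like this on a recurring schedule, and retaining the results, supports the ongoing testing and documentation the AI RMF emphasizes.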

Part Two details the “core” of the AI RMF, which is structured around four functions—each containing categories and subcategories—designed to “enable dialogue, understanding, and activities to manage AI risks and responsibly develop trustworthy AI systems.” The four core functions are summarized below, followed by an illustrative sketch of how an organization might track them:

  • Govern – Cultivating and implementing a culture of risk management and outlining processes and organizational schemes to identify and manage risk, as well as understanding, managing, and documenting legal and regulatory requirements involving the AI system.
  • Map – Understanding and documenting the intended purposes and impacts of the AI system, as well as the specific tasks and methods used to implement the AI system.
  • Measure – Evaluating the AI system and demonstrating it to be valid, reliable, and safe. 
  • Manage – Determining whether the AI system achieves its intended purpose, determining whether it should proceed, and ensuring that mechanisms are in place to sustain the value of the AI system.
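
Because the core is organized as functions containing categories and subcategories, some organizations may find it useful to track core activities in a simple structured record. The sketch below is one hypothetical way to do so; the category names, fields, and entries are illustrative assumptions rather than NIST’s official identifiers.

    from dataclasses import dataclass, field

    @dataclass
    class CoreActivity:
        """One tracked activity under an AI RMF core function (illustrative fields)."""
        function: str       # "Govern", "Map", "Measure", or "Manage"
        category: str       # descriptive label, not an official NIST identifier
        description: str
        owner: str
        evidence: list = field(default_factory=list)
        status: str = "not started"

    register = [
        CoreActivity(
            function="Govern",
            category="Legal and regulatory requirements",
            description="Document applicable AI hiring rules and audit obligations.",
            owner="Legal",
            evidence=["regulatory_inventory.docx"],
            status="in progress",
        ),
        CoreActivity(
            function="Measure",
            category="Validity, reliability, and bias testing",
            description="Run and retain quarterly performance and bias checks on the screening tool.",
            owner="People Analytics",
        ),
    ]

    open_items = [a for a in register if a.status != "complete"]
    print(f"{len(open_items)} open AI RMF core activities")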

Part Two also suggests preparing “AI RMF Profiles,” which describe how the core functions are implemented in a particular context:

  • Use case profiles – Applying core functions to a specific use case, such as an “AI RMF hiring profile” or an “AI RMF fair housing profile.”
  • Temporal profiles – Comparing the current state of an AI risk management activity to a desired target state, revealing gaps to be addressed to meet AI risk management objectives.
  • Cross-sectoral profiles – Covering risks of models or applications that can be used across different use cases or sectors.

Although the AI RMF does not include model templates, organizations should consider preparing AI RMF Profiles to streamline the process of operationalizing and documenting compliance with AI RMF guidance.
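
The AI RMF does not prescribe a profile format, but a temporal profile is essentially a gap analysis between a current state and a target state for each core function. The snippet below is a hypothetical sketch of how an “AI RMF hiring profile” might capture that comparison; the structure, field names, and entries are assumptions for illustration only.

    # Hypothetical "AI RMF hiring profile" recording current and target states
    # for each core function; all entries are illustrative.
    hiring_profile = {
        "use_case": "Resume screening software",
        "functions": {
            "Govern": {
                "current": "Ad hoc vendor reviews",
                "target": "Documented AI governance policy and vendor contract terms",
            },
            "Map": {
                "current": "Intended purpose undocumented",
                "target": "Written statement of purpose, users, and affected applicants",
            },
            "Measure": {
                "current": "No scheduled testing",
                "target": "Quarterly validity and bias testing with retained reports",
            },
            "Manage": {
                "current": "No decommissioning criteria",
                "target": "Defined triggers to retrain, suspend, or retire the tool",
            },
        },
    }

    # A gap exists wherever current practice has not yet reached the target state.
    gaps = {fn: s for fn, s in hiring_profile["functions"].items()
            if s["current"] != s["target"]}

    for fn, s in gaps.items():
        print(f"{fn}: move from '{s['current']}' to '{s['target']}'")

Keeping a profile in a reviewable form like this also makes it easier to demonstrate, after the fact, how AI RMF guidance was operationalized.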

EBG will continue to monitor and report on important developments involving AI in the workplace.
