
US AI Ethics Policy: What is the NIST Framework?

27-02-2026

As AI technology becomes more deeply embedded in corporate decision-making and social systems, the question is no longer "Should we adopt AI?" but rather "How should we manage it responsibly?" The United States has, relatively early on, adopted a principles- and framework-centered approach over prescriptive regulation, and at the heart of this approach are the AI ethics and governance standards proposed by the National Institute of Standards and Technology (NIST). The so-called "NIST AI Framework" is not a law, but rather a structure of questions that organizations using AI must work through.


Market Needs: Why the NIST Framework Is Getting the Attention It Deserves

Global companies face the challenge of operating identical AI systems under different regulatory environments. Unlike Europe's strict regulatory approach, the US market is relatively flexible, but it also places a premium on corporate responsibility. In this context, the NIST framework is not merely a "minimum standard for compliance": it gives investors, customers, and partners a signal of trust that AI is being used in a controlled manner. In the B2B, public, financial, and healthcare sectors in particular, NIST alignment is effectively treated as a measure of technological prowess.

 

Limitations of Existing AI Ethics Discussions

Until now, AI ethics discussions have often remained abstract pronouncements. Values like fairness, transparency, and explainability are important, but the real-world question remains: "So what exactly should we check?" When ethics is separated from technology, implementation is bound to stall. The distinguishing feature of the NIST framework is that it translates ethics into the language of risk management. This makes ethics an operational requirement, not just an ideal.

 

Core Structure of the NIST AI Risk Management Framework

NIST structures AI risk management into four functions.

  1. Govern asks whether responsibility, decision-making structures, and policies for AI are clearly defined within the organization. It treats AI not as a technology issue, but as an organizational one.
  2. Map establishes context: the purposes and environments in which AI systems are used, who the stakeholders are, and what risks may arise. This step cautions against indiscriminate, general-purpose deployment.
  3. Measure assesses AI risks quantitatively and qualitatively, including bias, accuracy, robustness, and security vulnerabilities. This replaces intuition-based judgment with evidence.
  4. Manage defines how identified risks are mitigated, continuously monitored, and responded to as conditions change. The framework emphasizes the importance of post-deployment management.

This structure is designed as a continuous cycle rather than a one-time checklist.
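As a rough illustration of how the cycle above might look in practice, the sketch below models a single risk record passing through the four functions as a minimal risk register in Python. This is not part of the NIST framework itself: all names (`AIRisk`, `demographic_parity_gap`), thresholds, and data are hypothetical, and the fairness metric is only one of many possible "Measure" checks.

```python
from dataclasses import dataclass

# Hypothetical sketch: class and function names are illustrative,
# not defined by NIST. Each field maps loosely to one RMF function.

@dataclass
class AIRisk:
    description: str
    context: str          # Map: where and how the system is used
    owner: str = ""       # Govern: named accountability
    severity: int = 0     # Measure: 1 (low) .. 5 (high)
    mitigation: str = ""  # Manage: how the risk is handled

def demographic_parity_gap(outcomes_a, outcomes_b):
    """One possible 'Measure' metric: the difference in positive-outcome
    rates between two groups (0.0 means parity)."""
    rate = lambda xs: sum(xs) / len(xs)
    return abs(rate(outcomes_a) - rate(outcomes_b))

# Map: record a risk in its concrete usage context.
risk = AIRisk(
    description="Possible approval-rate bias in a loan scoring model",
    context="Consumer credit decisions, US market",
    owner="Model Risk Committee",  # Govern
)

# Measure: quantify the risk from observed decisions (toy data).
gap = demographic_parity_gap([1, 1, 0, 1], [1, 0, 0, 0])
risk.severity = 4 if gap > 0.2 else 1  # threshold is an assumption

# Manage: attach a mitigation and keep monitoring after deployment.
risk.mitigation = "Re-weight training data; re-run metric monthly"
print(f"gap={gap:.2f}, severity={risk.severity}")  # → gap=0.50, severity=4
```

The point of the sketch is the loop: the record is revisited, re-measured, and re-managed on a schedule, mirroring the framework's emphasis on post-deployment management.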

 

Current Challenges from a Technical, Organizational, and Security Perspective

The biggest barrier companies face when implementing the NIST framework in practice is not technical but organizational, stemming from the language gap between AI development teams and the business, legal, and security departments. Furthermore, where the internal logic of AI models cannot be fully explained, organizations must also balance transparency against security. NIST acknowledges this reality and takes a practitioner-friendly stance, asking for "reasonable controls and documentation" rather than "complete explanation."

 

Iropke's Approach

Iropke doesn't treat the NIST framework as a mere reference. It reinterprets the Govern-Map-Measure-Manage structure for practical use, grounded in the websites, internal systems, and data flows where AI is actually deployed. For companies targeting the global market, Iropke uses NIST as a benchmark to design a scalable structure that can also respond to the EU AI Act and other national regulations. The key is not avoiding regulation, but building the capabilities that make AI systems trustworthy.