The EU AI Act regulates risk. The NIST AI RMF manages it. Both focus on how to deploy AI safely, but neither on who decides what AI should do. Semantic governance addresses this gap.
EU AI Act (European Union)
Approach: Risk-based regulation
Focus: AI systems categorized by risk level (unacceptable, high, limited, minimal)
Governs: How AI systems are deployed and what safeguards they must have
Limitation: Focuses on systems, not on who decides what goals the systems should pursue
NIST AI RMF (United States, voluntary)
Approach: Risk management
Focus: Managing AI risks across lifecycle (Map, Measure, Manage, Govern)
Governs: How organizations identify and manage AI-related risks
Limitation: Assumes goals are given; focuses on managing risks to achieving them
Semantic Governance (framework, not jurisdiction-specific)
Approach: Intent specification
Focus: Making explicit whose values are embedded and with what authority
Governs: Who has authority to specify what AI should do—and on what basis
Limitation: Theoretical framework; requires implementation through other mechanisms
| Dimension | EU AI Act | NIST AI RMF | Semantic Governance |
|---|---|---|---|
| Primary question | What risks does this system pose? | How do we manage this system's risks? | Who decided this system's purpose? |
| Assumes | We know what harms to prevent | We know what goals to pursue | Nothing—makes authority explicit |
| Binding? | Yes (law in EU) | No (voluntary framework) | Framework for any binding mechanism |
| Accountability | Deployer/provider compliance | Organizational risk management | Authority chains to decision-makers |
Both major frameworks govern the "how" of AI deployment: the EU AI Act by imposing safeguards on systems according to their risk level, and the NIST AI RMF by guiding how organizations identify and manage risks across the lifecycle.
What's missing: Neither addresses who has authority to decide what goals the AI should pursue. A perfectly compliant, well-risk-managed system can still serve purposes that no one explicitly authorized.
The EU AI Act is the world's first comprehensive AI law, categorizing AI systems by risk level (unacceptable, high, limited, minimal) and imposing requirements accordingly. High-risk systems require conformity assessments, documentation, and human oversight. It focuses on what safeguards systems must have, not who decides what they should do.
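As a rough illustration of the tiering mechanism, the sketch below maps each tier to a condensed obligation list. The tier names follow the Act, but the `OBLIGATIONS` table, the function name, and the obligation strings are illustrative shorthand, not a legal encoding of the Act's actual requirements.

```python
from enum import Enum

class RiskTier(Enum):
    """The EU AI Act's four risk tiers (the Act itself defines the criteria)."""
    UNACCEPTABLE = "unacceptable"  # prohibited practices
    HIGH = "high"                  # heaviest obligations
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # no additional obligations

# Hypothetical tier -> obligations mapping, condensed for illustration.
OBLIGATIONS: dict[RiskTier, list[str]] = {
    RiskTier.UNACCEPTABLE: ["deployment prohibited"],
    RiskTier.HIGH: ["conformity assessment", "technical documentation", "human oversight"],
    RiskTier.LIMITED: ["transparency disclosure"],
    RiskTier.MINIMAL: [],
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Return the illustrative obligations attached to a risk tier."""
    return OBLIGATIONS[tier]

if __name__ == "__main__":
    print(obligations_for(RiskTier.HIGH))
```

Note what the sketch takes as input: a system whose purpose is already fixed. The tiering classifies and constrains that system; it says nothing about who authorized the purpose.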
The NIST AI RMF is a voluntary framework helping organizations manage AI-related risks through four functions: Map (understand context), Measure (assess risks), Manage (address risks), and Govern (create accountability). It focuses on how organizations handle AI risks, assuming goals are already determined.
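A toy sketch, under assumptions, of how the four functions might thread a shared risk register through a system's lifecycle. The `RiskRegister` fields and function signatures are invented for illustration, and in the actual RMF, Govern is a cross-cutting function rather than a final pipeline step.

```python
from dataclasses import dataclass, field

@dataclass
class RiskRegister:
    """Hypothetical register passed through the four AI RMF functions."""
    context: dict = field(default_factory=dict)     # Map: purpose, stakeholders
    risks: list = field(default_factory=list)       # Measure: assessed risks
    mitigations: dict = field(default_factory=dict) # Manage: chosen treatments
    owners: dict = field(default_factory=dict)      # Govern: accountable roles

def map_context(reg: RiskRegister, purpose: str, stakeholders: list[str]) -> None:
    """Map: record the system's purpose and who it affects."""
    reg.context = {"purpose": purpose, "stakeholders": stakeholders}

def measure_risks(reg: RiskRegister, findings: list[str]) -> None:
    """Measure: record risks assessed against the mapped context."""
    reg.risks.extend(findings)

def manage_risks(reg: RiskRegister, plan: dict[str, str]) -> None:
    """Manage: attach a mitigation decision to each measured risk."""
    reg.mitigations.update(plan)

def govern(reg: RiskRegister, owners: dict[str, str]) -> None:
    """Govern: assign an accountable owner to each activity."""
    reg.owners.update(owners)

if __name__ == "__main__":
    reg = RiskRegister()
    map_context(reg, "resume screening", ["applicants", "recruiters"])
    measure_risks(reg, ["disparate impact on protected groups"])
    manage_risks(reg, {"disparate impact on protected groups": "bias audit"})
    govern(reg, {"bias audit": "model risk officer"})
    print(reg)
```

The sketch also makes the framework's key assumption visible: the purpose ("resume screening") arrives as a given input, decided somewhere outside the process.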
Both the EU AI Act and the NIST AI RMF assume we already know what AI should do; they govern how to do it safely. Semantic governance addresses the prior question: who has authority to decide what AI should do? It makes intent specification explicit rather than leaving it implicit in system design.
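To make "intent specification" concrete, here is a minimal sketch of what an explicit record could look like. Since semantic governance is a theoretical framework that prescribes no schema, every type, field, and value below is hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Authorization:
    """One link in an authority chain: who approved what, on what basis."""
    authority: str  # named person or body holding decision rights
    basis: str      # source of that authority (charter, statute, delegation)
    decision: str   # what was authorized

@dataclass(frozen=True)
class IntentSpecification:
    """Hypothetical record making a system's purpose and its authorization explicit."""
    system: str
    goal: str
    values_embedded: tuple[str, ...]
    authority_chain: tuple[Authorization, ...]

spec = IntentSpecification(
    system="loan-screening-model",
    goal="rank applications by predicted repayment",
    values_embedded=("equal treatment across protected classes",),
    authority_chain=(
        Authorization("Board risk committee", "corporate charter", "approved the goal"),
        Authorization("Chief compliance officer", "delegated oversight", "approved embedded values"),
    ),
)

print(spec.authority_chain[0].authority)
```

The `authority_chain` field is the point the table above calls "authority chains to decision-makers": each goal and embedded value traces back to a named decision-maker and the basis of their authority, rather than remaining implicit in system design.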
The three are complementary. The EU AI Act provides legal compliance requirements. NIST provides risk-management processes. Semantic governance provides the authority layer: who decides what goals the compliant, risk-managed system should pursue. They address different aspects of AI governance.