Governments regulate. Companies build. Academics advise. Civil society advocates. But who decides what AI should value? The question everyone avoids.
Governments
Claim: Democratic mandate to regulate in public interest
Actual power: Can regulate deployment, impose requirements, levy fines
Limitation: Lack technical expertise; regulation lags innovation; jurisdiction-limited

Companies
Claim: Build the systems, understand the capabilities
Actual power: Make daily decisions about training, deployment, safety measures
Limitation: Incentives may conflict with public interest; no democratic mandate

Academics
Claim: Expertise in ethics, safety, alignment research
Actual power: Influence through research, media, advisory roles
Limitation: No implementation authority; views often contested among experts

Civil society
Claim: Represent affected communities and public interest
Actual power: Advocacy, public pressure, coalition building
Limitation: No direct authority over systems; fragmented voices
The current state of AI governance leaves critical gaps:
No single authority
Multiple actors claim governance roles, but none has clear authority over what AI should value.
Values implicitly set
Training data, reward functions, and deployment choices embed values without explicit authorization.
Accountability diffuse
When an AI system causes harm, it is unclear who authorized the behavior behind it.
Conflicts unresolved
When different stakeholders want different things, there's no mechanism to determine whose preference prevails.
Rather than governing AI systems after deployment, pre-governing establishes authority structures before delegation:
Explicitly define who has authority to make which decisions about AI systems.
Require explicit authorization for values embedded in training and deployment.
Create traceable paths from AI behavior to human decision-makers.
Key insight: The question isn't "how do we govern AI?" but "who has authority to decide what AI should do—and on what basis?"
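To make these three elements concrete, here is a minimal, hypothetical sketch in Python of what an "authority registry" might look like. It is illustrative only, not IRSA's specification; every class, field, and example value below is an assumption introduced for this sketch.

    # Hypothetical sketch of a pre-governing "authority registry".
    # All names and fields are illustrative assumptions, not IRSA's design.
    from dataclasses import dataclass, field
    from datetime import datetime

    @dataclass
    class Authorization:
        """Records who explicitly authorized a value or decision, and on what basis."""
        decision: str          # e.g. "embed value: refuse to assist with fraud"
        authorized_by: str     # accountable human or body, e.g. "Safety Board"
        basis: str             # mandate for the decision, e.g. "board charter, section 3"
        timestamp: datetime = field(default_factory=datetime.utcnow)

    @dataclass
    class AuthorityRegistry:
        """Maps decision areas to the actors allowed to decide them, with an audit trail."""
        authority_map: dict[str, str] = field(default_factory=dict)   # decision area -> actor
        audit_trail: list[Authorization] = field(default_factory=list)

        def assign(self, decision_area: str, actor: str) -> None:
            # 1. Explicitly define who has authority to make which decisions.
            self.authority_map[decision_area] = actor

        def authorize(self, decision_area: str, decision: str,
                      actor: str, basis: str) -> Authorization:
            # 2. Require explicit authorization: reject decisions from actors
            #    who were never given authority over this decision area.
            if self.authority_map.get(decision_area) != actor:
                raise PermissionError(f"{actor} has no authority over '{decision_area}'")
            record = Authorization(decision=decision, authorized_by=actor, basis=basis)
            self.audit_trail.append(record)
            return record

        def trace(self, decision: str) -> list[Authorization]:
            # 3. Traceability: given an AI behavior, find the humans who authorized it.
            return [a for a in self.audit_trail if a.decision == decision]

In this toy model, assign() records who may decide what, authorize() rejects value choices from actors without that authority, and trace() answers the accountability question above: given a behavior, which humans authorized it and on what basis.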
No single entity governs AI today. Governments set regulations but lack technical expertise. Companies make daily decisions but lack a democratic mandate. Academics advise but can't implement. Civil society advocates but has no direct authority. The result is fragmented governance with no clear answer to the question 'who decides what AI should do?'
Governments can regulate deployment (what companies must do) but struggle to govern intent (what AI should value). Regulation requires understanding technology that evolves faster than law, addressing cross-border systems, and making value choices that democracies haven't explicitly debated.
Companies make daily governance decisions (training choices, safety measures, deployment policies) but lack a democratic mandate to make value choices for society. Their incentives may conflict with the public interest, and 'self-governance' has historically failed in other industries.
Pre-governing is IRSA's concept for addressing AI governance before delegation. Rather than governing AI systems after deployment, pre-governing establishes: who has authority to make which decisions, what values they're authorized to embed, and how accountability flows when things go wrong.