Why governance architecture — not model capability — is the binding constraint on AI's transformative promise.
In 1967, computer scientist Gene Amdahl observed something counterintuitive about performance improvement: when you speed up one part of a system, the parts you haven't sped up become the new bottleneck. The faster the improved components run, the more the unimproved ones constrain overall throughput. Acceleration, paradoxically, makes friction more visible.
We are about to learn this lesson at civilisational scale.
Dario Amodei's Machines of Loving Grace describes a world in which artificial intelligence compresses decades of progress in biology, medicine, and economic development into a few years. The argument is serious, grounded, and — on the technical trajectory — increasingly difficult to dismiss. But buried within it is a structural caveat that deserves more attention than it receives.
Amodei identifies “constraints from humans” as one of the fundamental rate-limiters on AI's transformative potential. Laws, approvals, institutional structures, regulatory regimes, liability exposure — these don't dissolve because the models get smarter. If anything, as operational capability accelerates, the coordination overhead required to authorise action becomes the dominant drag. The bottleneck doesn't disappear. It migrates.
This is the Amdahl problem for institutions.
[Figure: System speedup as a function of AI acceleration factor, for varying governance sequential fractions. At a 20% governance sequential fraction — within the typical institutional range — no amount of AI acceleration can push system throughput beyond 5.0×.]
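The arithmetic behind that ceiling is Amdahl's law itself, applied with governance as the sequential fraction. A minimal sketch in Python (the function name and the framing of governance as the unaccelerated fraction are illustrative):

```python
def system_speedup(accel: float, sequential_fraction: float) -> float:
    """Amdahl's law: overall speedup when a fraction of the work
    (here, governance coordination) cannot be accelerated."""
    return 1.0 / (sequential_fraction + (1.0 - sequential_fraction) / accel)

# With 20% of institutional throughput locked in sequential governance,
# even a 1000x operational acceleration yields less than a 5x system gain:
# the limit as accel grows is 1 / 0.20 = 5.0.
for accel in (10, 100, 1000):
    print(f"{accel:>5}x AI acceleration -> "
          f"{system_speedup(accel, 0.20):.2f}x system speedup")
```

The asymmetry is the point: pushing acceleration from 100× to 1000× buys almost nothing, while shrinking the governance fraction from 20% to 10% doubles the ceiling.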
Productivity economics has long recognised that something is wrong. Advanced economies have experienced sustained productivity stagnation despite continued technological investment, capital accumulation, and expanding human capital. The usual explanations — innovation diffusion, demographics, market concentration — are partial at best.
Our research at IRSA introduces a complementary account: Governance Coordination Cost (GCC) — the aggregate temporal, cognitive, and organisational expenditure required to align institutional action with authorised decision pathways under conditions of oversight, compliance obligation, and liability exposure.
GCC is not a new concept in spirit. Transaction cost economics has always recognised that coordination has a price. But the classical frameworks focus on market-to-firm boundaries. What we're measuring is something different: the coordination burden that accumulates within mature institutions as authority becomes fragmented, oversight layers multiply, compliance surfaces expand, and defensive proceduralisation becomes individually rational even when collectively destructive.
The dynamics are structural, not behavioural. When liability exposure is asymmetric — the cost of under-consultation can end a career, while the cost of adding another review layer is diffuse and invisible — governance structures ratchet upward and rarely contract. Each crisis adds a layer. Each reform adds a layer. Rarely does anyone remove one. The result is a structural ratchet: coordination density drifts upward independent of changes in actual risk or productive capacity.
This is not an AI problem. It predates AI by decades. Any sufficiently mature institution — a hospital, a regulator, a bank, a university — recognises the pattern. The approval chain that once had three steps now has seven. The compliance surface that covered two jurisdictions now covers twelve. The risk committee that met quarterly now meets weekly. Each addition was rational in isolation. The aggregate is a governance architecture that consumes an increasing share of institutional capacity merely to sustain current output. AI doesn't create this problem. AI makes it impossible to ignore, because when operational capability jumps by an order of magnitude overnight, the governance layer that was absorbing 15% of institutional capacity quietly becomes the factor that determines whether the other 85% can move at all.
At the macroeconomic level, this manifests as a peculiar kind of stagnation. Institutions compensate for governance friction not by becoming more efficient, but by adding personnel — compliance officers, legal reviewers, risk managers, coordinators — to sustain output through scale rather than throughput. Aggregate GDP grows, but output per worker doesn't. The coordination overhead is absorbed invisibly, attributed to nothing in particular.
Extend this cross-border, and the problem compounds. When regulatory regimes addressing comparable objectives lack structural interoperability across jurisdictions, firms must reconcile procedural divergence that serves no substantive governance purpose. We've termed this Governance Interoperability Cost (GIC) — a measurable drag on investment velocity, SME scalability, and innovation diffusion that operates entirely independently of regulatory stringency. You can have high standards and high interoperability. Most jurisdictions currently have high standards and low interoperability. The difference is architectural, not political.
[Figure: How governance layers accumulate over an institutional lifetime — ad hoc response vs architectural governance.]
Machines of Loving Grace names the bottleneck. It doesn't resolve it.
That's not a criticism — the essay is about potential, not implementation. But the implication is significant: the degree to which AI delivers on its transformative promise is not determined solely by model capability. It is determined by whether the institutional layer can keep pace with operational acceleration.
Consider what actually happens when an AI agent is deployed in a consequential organisational context. The agent can execute faster than any human team. It can synthesise more information, identify more options, move more quickly. But every meaningful action still crosses an authority boundary. Every boundary requires a decision. Every decision requires that someone — or something — can answer the question: does this actor have standing to proceed?
The same question applies, and has always applied, to human actors. A procurement officer approving a vendor. A clinician ordering a treatment. A portfolio manager executing a trade. Each of these crosses an authority boundary. Each requires that the institution can answer, structurally and not merely procedurally, whether this actor has standing to take this action in this context. The difference with AI is not that the question changes. The difference is that the question gets asked thousands of times per hour instead of dozens per day.
Today, that question is answered inconsistently, slowly, and often not at all until something goes wrong. Approval workflows cover the anticipated. Everything else is resolved through hierarchy, politics, and retrospective review. This works tolerably when the pace of action is human-scale. It fails structurally when the pace of action is AI-scale.
This is not a compliance problem. It is a governance architecture problem.
The theoretical case for governance architecture reform is timeless. Amdahl's law doesn't have a date on it. GCC has been accumulating for decades.
But the practical urgency is specific to this moment. In the first quarter of 2026, agentic AI systems — autonomous agents that take real actions in real organisational environments — moved from research demonstrations to production deployment. AI agents are writing and committing code, executing financial transactions, drafting and sending communications, and managing operational workflows inside live institutions. This is not prospective. It is current.
The institutions deploying these agents are encountering the governance bottleneck not as a theoretical concern but as an operational crisis. An agent that can write and deploy code in minutes is constrained by an approval process designed for weekly release cycles. An agent that can draft investor communications in seconds is constrained by a compliance review chain designed for quarterly reports. The mismatch between operational capability and governance architecture is no longer an abstract productivity drag. It is a daily, measurable, increasingly costly friction that institutions are attempting to resolve ad hoc, without frameworks, without measurement instruments, and without architectural principles.
The window for principled intervention is narrow. Institutions that solve this ad hoc will build governance debt — expedient workarounds that become structural liabilities. Institutions that solve it architecturally will compound the advantage. The difference between these two outcomes is not inevitable. It is a design choice available right now.
The theoretical work on GCC and GIC establishes the economic case: governance coordination cost is a real, measurable structural variable with macroeconomic implications. The Governance Coordination Index (GCI) provides the measurement instrument — a scalar representation of coordination load per authorised decision, decomposable into escalation depth, approval latency, cross-functional handoff count, documentation overhead, and rework ratio.
These tools serve a diagnostic function. They make governance friction visible, benchmarkable, and strategically legible at board level. They allow institutions to ask, for the first time with rigour: how much of our coordination overhead is structurally necessary, and how much is compounding friction with no marginal legitimacy benefit?
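One way such an index could be composed from the components named above. This is a minimal sketch under stated assumptions — the equal weights, the baseline values, and the normalisation scheme are illustrative, not IRSA's published methodology:

```python
from dataclasses import dataclass

@dataclass
class DecisionTrace:
    escalation_depth: int        # levels of hierarchy the decision climbed
    approval_latency_days: float # elapsed time awaiting sign-off
    handoff_count: int           # cross-functional handoffs
    documentation_hours: float   # overhead spent producing paperwork
    rework_ratio: float          # fraction of decisions reopened or redone

def gci(trace: DecisionTrace, weights=(0.2, 0.2, 0.2, 0.2, 0.2)) -> float:
    """Illustrative scalar coordination load per authorised decision.
    Each component is normalised against an assumed institutional baseline,
    so a decision matching the baseline scores 100."""
    baseline = DecisionTrace(2, 5.0, 3, 4.0, 0.1)  # hypothetical benchmark
    components = (
        trace.escalation_depth / baseline.escalation_depth,
        trace.approval_latency_days / baseline.approval_latency_days,
        trace.handoff_count / baseline.handoff_count,
        trace.documentation_hours / baseline.documentation_hours,
        trace.rework_ratio / baseline.rework_ratio,
    )
    return 100 * sum(w * c for w, c in zip(weights, components))

# A decision at the benchmark scores ~100; one with doubled friction
# on every component scores ~200.
print(gci(DecisionTrace(4, 10.0, 6, 8.0, 0.2)))
```

The design choice that matters is the normalisation: a scalar per authorised decision makes coordination load comparable across units, and benchmarkable before and after a redesign.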
But measurement alone doesn't change the number.
What changes the number is relocating governance from retrospective procedural review to structurally embedded, real-time constraint enforcement. Not compliance checklists applied after the fact. Not approval workflows that cover pre-anticipated actions while leaving everything else to discretion. Instead: authority boundaries that are explicit, encoded, and enforced at the moment of action — whether the actor is human or AI.
In practice, this means an institution defines its delegation of authority as a set of constraints: which actors may take which actions in which domains, under what conditions, with what approval requirements. These constraints are evaluated in real time, at the moment of action. An action within bounds proceeds without friction — no approval queue, no review meeting, no email chain. An action that crosses a boundary triggers a structured escalation: the actor is told what boundary was reached, what authority is required, and how to obtain it. The proof layer — the full audit trail of what was checked, what was allowed, what was escalated — is generated as a byproduct of enforcement, not as additional administrative work.
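In code, the pattern described above — evaluate at the moment of action, proceed silently when in bounds, escalate with structure when not — might look like the following sketch. All names, fields, and thresholds are hypothetical; this is not a reference to any real enforcement system:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Constraint:
    actor_roles: set          # who has standing to act
    actions: set              # which actions are delegated
    max_amount: float         # an example condition: a spend ceiling
    required_authority: str   # who can authorise an out-of-bounds action

@dataclass
class Decision:
    allowed: bool
    reason: str
    escalate_to: Optional[str] = None

def check(constraint: Constraint, role: str, action: str, amount: float) -> Decision:
    """Evaluate an action against an encoded authority boundary in real time.
    The returned record doubles as the audit trail: what was checked, what
    was allowed, what was escalated — a byproduct of enforcement itself."""
    if role not in constraint.actor_roles:
        return Decision(False, f"role '{role}' lacks standing",
                        escalate_to=constraint.required_authority)
    if action not in constraint.actions:
        return Decision(False, f"action '{action}' outside delegated scope",
                        escalate_to=constraint.required_authority)
    if amount > constraint.max_amount:
        return Decision(False, f"amount {amount} exceeds ceiling {constraint.max_amount}",
                        escalate_to=constraint.required_authority)
    return Decision(True, "within bounds")

procurement = Constraint({"procurement_officer", "ai_agent"},
                         {"approve_vendor", "issue_po"}, 50_000.0, "cfo")

print(check(procurement, "ai_agent", "issue_po", 12_000.0))   # within bounds: proceeds
print(check(procurement, "ai_agent", "issue_po", 120_000.0))  # escalates, names the authority
```

Note that the in-bounds path has zero coordination cost — no queue, no meeting — and the out-of-bounds path returns a structured escalation rather than a bare refusal. The check is identical whether the actor is a human role or an AI agent.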
Critically, this is not control imposed from above. Any governed actor — human or AI — can challenge any constraint through a structured contestation process: file a challenge, present evidence, receive a ruling, obtain a remedy. Governance without contestation is authoritarianism. The architecture must encode the right to question the rules, not merely the obligation to follow them.
The GCI then becomes not just a diagnostic but a performance metric. Coordination overhead is measurable before and after. Governance redesign has a number attached to it. Productivity improvement through institutional architecture becomes legible to boards, treasuries, and regulators in exactly the terms they already understand.
[Figure: GCI component breakdown — procedural coordination cost vs irreducible human judgment. Of a total GCI score of 100 per authorised decision, procedural overhead dominates; human judgment accounts for a minority of total coordination cost.]
There is a deeper structural question here — whether governance architecture should remain bilateral (each institution building its own, learning in isolation) or evolve toward shared infrastructure where constraint patterns, escalation designs, and coordination improvements compound across institutions. That question warrants its own treatment; what matters for the Amdahl argument is that the transition from ad hoc to architectural governance is the precondition for either path.
Amodei's framing of “a country of geniuses in a datacenter” is useful precisely because it makes the bottleneck vivid. A country of geniuses is still constrained by everything the geniuses can't unilaterally decide. Supply chains. Physical experiments. Human clinical trials. And institutional approval structures.
The first three are hard constraints — physics, biology, time. The last one is a design choice.
The institutions that resist redesign will not merely fail to capture the gains from AI acceleration. They will find that the governance ratchet tightens further — because AI-scale action without AI-scale governance produces AI-scale incidents, and each incident adds another layer to the procedural stack. The ratchet doesn't pause for institutions that aren't ready.
The institutions that treat governance architecture as a first-order design problem — defining authority boundaries with precision, enforcing them structurally, measuring coordination cost rigorously, and participating in commons infrastructure that compounds improvements across the network — will find that governance becomes a source of institutional advantage rather than a drag on it.
This is not a prediction about technology. It is an observation about institutional design. The bottleneck has always been there. AI just removed every excuse for not addressing it.
Related concepts:

- Governance Coordination Cost (GCC) — Trunk III — Authority & Governance
- Governance Coordination Index (GCI) — IRSA measurement frameworks
- Governance Interoperability Cost (GIC) — Trunk III — Authority & Governance
- Authority Capacity Collapse — Trunk III
- R-Factor / R-Index — IRSA measurement frameworks
- Pre-Governance — Trunk V — Semantic & AI Governance
Roshan Ghadamian is Principal Researcher at the Institute for Regenerative Systems Architecture (IRSA), where his work focuses on governance architecture, institutional authority design, and coordination cost in complex organisations. Related working papers on Governance Coordination Cost (GCC) and Governance Interoperability Cost (GIC) are available at irsa.institute.